VR Locomotion Design Guide
==========================

Locomotion is how people move through virtual worlds. It’s one of the core design features of any VR application, as providing a comfortable and efficient locomotion experience is essential to the success of a VR project. Even in the simplest of applications, there can be a surprising number of situations where locomotion will have a major impact on the user experience. This guide is intended to help developers and designers anticipate and plan for these challenges while maximizing the quality of their locomotion design systems.

Core Types of Virtual Locomotion: Physical and Artificial
---------------------------------------------------------

The way people move through a virtual environment is usually referred to as virtual locomotion. We move through the virtual world, or the virtual world moves around us. Sometimes, both happen at the same time. When the camera matches the physical movement of the headset, so that the viewer’s perspective in the virtual world tracks the headset exactly, we refer to this as “Physical Locomotion”. When the camera moves independently of the position of the headset, we refer to this as “Artificial Locomotion”.

### Physical Locomotion

Physical locomotion is when movement in the virtual world is controlled by your movement in the physical world. For example, you walk, turn, or move through the virtual world by walking, turning, and moving in the real, physical world. With physical locomotion, the camera movement in the virtual world should match the exact movement of the physical headset.

Even if your app is not designed for physical locomotion, it’s important to plan for what happens when people physically move in the real world. For instance, what if a user takes a step in physical space or leans forward while sitting? Would this move the virtual camera into solid geometry like walls, decorations, or characters in VR? Planning for these scenarios will solve many potential comfort, performance, and usability challenges as you design your locomotion system.

### Artificial Locomotion

Artificial locomotion is when movement in the virtual world does not directly correspond to physical movement. For example, when you walk, turn, or move through the virtual world in response to controller inputs, such as pushing a thumbstick. The most common use of artificial locomotion is to make it possible for people to move through virtual environments that are larger than their physical playspace; however, there are many other scenarios where it is necessary or useful. For example, movement can sometimes be controlled by, or in response to, the environment, like an elevator or a roller coaster.

Games that are designed primarily for physical locomotion can usually benefit from supporting artificial locomotion because it will make it possible for people with limited space or mobility issues to experience the content. Unless physical locomotion is core to the design, it is recommended to support artificial locomotion to make the application as accessible as possible.
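In implementation terms, these two types usually map onto a single pattern: physical locomotion is the tracked headset pose inside a playspace, while artificial locomotion moves the playspace itself. The engine-agnostic sketch below (all names are hypothetical, not any particular SDK’s API) shows that separation:

```python
from dataclasses import dataclass

@dataclass
class Vec3:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0

    def __add__(self, other: "Vec3") -> "Vec3":
        return Vec3(self.x + other.x, self.y + other.y, self.z + other.z)

class PlayerRig:
    """Tracking-space origin. Artificial locomotion moves this origin;
    physical locomotion is the headset pose reported inside it."""

    def __init__(self) -> None:
        self.origin = Vec3()     # world-space position of the playspace
        self.hmd_local = Vec3()  # headset pose relative to the playspace

    def on_tracking_update(self, hmd_pose: Vec3) -> None:
        # Physical locomotion: the camera must follow the headset 1:1,
        # so this value is only ever read from tracking, never modified.
        self.hmd_local = hmd_pose

    def apply_artificial_motion(self, delta: Vec3) -> None:
        # Artificial locomotion: translate the playspace origin instead
        # of the camera, so physical and artificial motion compose cleanly.
        self.origin = self.origin + delta

    def camera_world_position(self) -> Vec3:
        return self.origin + self.hmd_local

rig = PlayerRig()
rig.on_tracking_update(Vec3(0.2, 1.6, 0.1))       # user leans or steps: physical
rig.apply_artificial_motion(Vec3(0.0, 0.0, 2.0))  # thumbstick input: artificial
print(rig.camera_world_position())                # Vec3(x=0.2, y=1.6, z=2.1)
```

Keeping the two separate makes the rule above easy to honor: the tracked pose is always applied 1:1, and every artificial technique only ever touches the playspace origin.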
Additional Resources for VR Locomotion Design and Development
-------------------------------------------------------------

To begin learning about VR locomotion design, we recommend starting with our guide to [Common Types of Artificial Locomotion](https://developer.oculus.com/resources/artificial-locomotion/). As always, if you’re looking to discuss any topics covered within this guide, feel free to engage the larger VR developer community within the [Oculus Developer Forum](https://forums.oculusvr.com/developer/).

Lastly, to begin implementing your locomotion system, we recommend checking out any of the following resources:

- [VR Accessibility Design Guide: Locomotion](https://developer.oculus.com/resources/design-accessible-vr-locomotion/)
- [Unreal + Oculus VR Development Course: Locomotion System Setup](https://www.unrealengine.com/en-US/onlinelearning-courses/vr-development-with-oculus-and-unreal-engine)
- [Oculus Sample for Unity: Locomotion Sample Scene](https://developer.oculus.com/documentation/unity/unity-sf-locomotion/)
- [Unity + Oculus VR Development Course: Locomotion & Ergonomics](https://learn.unity.com/tutorial/unit-4-locomotion-and-ergonomics/)

Common Types of Artificial VR Locomotion
========================================

The list below describes common scenarios where the camera movement is driven by the application and not in response to physical locomotion. These refer to the higher-level concepts of how and why the camera moves, as opposed to movement driven by controller inputs, which is described later in this guide.

### Avatar Movement

Avatar movement is when the experience requires you to move a character using some combination of thumbstick, button, headset, motion controllers, or gameplay states. This is the locomotion method used by a large majority of VR games today. It’s how people move through any first-person game where they are in direct control of their speed, direction of movement, and orientation of the camera at all times.

### Scripted Movement

Scripted movement is when the virtual camera moves along a predefined path of motion. Sometimes, but not always, the orientation of the camera is part of this movement. A few examples of scripted movement include roller coasters, theme park rides, trains, and cinematic camera moves.

### Steering Movement

With steering movement, the player controls artificial motion that continues without continuous input, such as driving a car. Typically, this kind of movement has inertia and momentum. Unlike avatar movement, steering movement prevents immediate starts, stops, or changes in direction. Examples include flight simulators and driving games.

### Environmental Movement

Environmental movement is when movement occurs as a byproduct of where the person is, what they’re doing, or what else is going on in the virtual world.
Examples include:

- Falling off a ledge
- Moving platforms or elevators
- Explosions or other forces that move the player
- Being pushed by a car
- Being pushed by an NPC
- Sinking into water

### Teleportation

A teleport is an event that leads to a sudden change in the user’s perspective. An additional benefit of teleportation is that, unlike other types of movement, it need not be a form of continuous movement. This can be helpful for people sensitive to the side effects of [vection](https://developer.oculus.com/resources/locomotion-comfort-usability/#vection), since teleportation can prevent vection entirely. Teleportation can be integrated into the design in a variety of ways, for example:

- Player-controlled teleports, where the user selects the destination within legal game space or from predefined destinations.
- Teleports to dynamic locations determined by gameplay mechanics. Damaged Core uses this approach, as moving through the map is a side effect of taking over nearby enemy robots and having the camera shift to the new host robot.
- Automatic teleports controlled by game logic. These are often used to move people during major transitions or narrative events, which can be disorienting to users if they aren’t expecting it.

### World Pulling

World pulling is when the user is stationary until they grab some point in the world and pull or push it. This action shifts the perspective as the world moves to follow the push or pull motion. A few examples include rock climbing, ladders, wall scaling, and zero-gravity movement. These techniques can be seen in many popular games, such as [Lone Echo](https://www.oculus.com/experiences/rift/1368187813209608/), [The Climb](https://www.oculus.com/experiences/quest/2376737905701576/) and [POPULATION: ONE](https://www.oculus.com/experiences/quest/2564158073609422/).

### Abstract and Creative Locomotion Techniques

There are many other interesting and abstract ways to move people through an environment which are less commonly used, but can serve as the basis for compelling and interesting locomotion design. For example, the non-Euclidean environment in [Unseen Diplomacy](https://store.steampowered.com/app/429830/Unseen_Diplomacy/) allows people to move through connected spaces that couldn’t exist in real life because they would overlap. The shifting environments of [Sightline VR](https://store.steampowered.com/app/412360/SightLineVR/) deliver an experience where the world outside the field of view slowly evolves in a way that leads the player from a void, to a forest, and eventually to a city as a result of merely looking around while sitting still.

For more inspiration, check out the [VR Locomotion Vault](https://locomotionvault.github.io/), which maintains a list of VR locomotion techniques and related information.
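Of the techniques above, world pulling translates into code especially directly: while a grip is held, the playspace is moved opposite the hand’s tracking-space motion so the grabbed point stays fixed in the world. A minimal sketch, with positions as (x, y, z) tuples in tracking space and `rig_origin` as the world-space playspace position (names are illustrative):

```python
def world_pull_step(rig_origin, prev_hand, hand, grip_held):
    """One frame of world pulling: while the grip is held, move the
    playspace opposite the hand's tracking-space motion so the grabbed
    point stays fixed in the world."""
    if grip_held and prev_hand is not None:
        dx, dy, dz = (hand[i] - prev_hand[i] for i in range(3))
        rig_origin = (rig_origin[0] - dx,
                      rig_origin[1] - dy,
                      rig_origin[2] - dz)
    return rig_origin, (hand if grip_held else None)

# Pulling the hand 0.3 m downward while gripping raises the player 0.3 m.
origin, prev = (0.0, 0.0, 0.0), (0.5, 1.2, 0.4)
origin, prev = world_pull_step(origin, prev, (0.5, 0.9, 0.4), grip_held=True)
# origin is now (0.0, 0.3, 0.0)
```

Using the hand’s tracking-space position (rather than its world position) keeps the rig movement from feeding back into the measured hand delta.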
What’s Next: Forms of Control and Input for Artificial Locomotion
------------------------------------------------------------------

There are still opportunities to invent novel techniques for locomotion that will lead to new and interesting experiences. We invite you to be creative and to experiment with the goal of discovering new and innovative techniques. To learn more about VR locomotion design, we recommend reviewing the [Forms of Control and Input for Artificial Locomotion](https://developer.oculus.com/resources/artificial-locomotion-controls/) guide.

Artificial VR Locomotion - Controls and User Input
==================================================

Now that we know the different [types of artificial locomotion](https://developer.oculus.com/resources/artificial-locomotion/), let’s discuss how to enable the user to control their movement within the virtual space. The purpose of any control system is to capture player intent, and doing so within an artificial locomotion system is no different. Capturing intent is particularly challenging in VR because of the many variables involved, such as head orientation, motion tracking data, avatar physics, and the user’s physical posture. There are many ways to control artificial locomotion, and some of the more useful ones are described below.

Locomotion can be a significant contributor to VR motion discomfort, so it’s important to provide effective control of locomotion while maximizing user comfort. The [Techniques and Best Practices](https://developer.oculus.com/resources/locomotion-design-techniques-best-practices/) section provides further detail on how to improve the comfort of the various control techniques described below.

Thumbstick-Driven Locomotion
----------------------------

Using the controller thumbsticks is a very common way to control locomotion systems. However, thumbstick control behavior is usually more complicated than just moving the avatar in the direction of the stick. While there aren’t any rules for how thumbstick controls should behave, a number of design patterns have emerged which people have come to expect from certain types of VR applications.

### Direction-Mapping

When using thumbsticks, the direction the avatar moves depends on how the thumbstick directions are mapped to directions in the virtual space. People generally expect to move forward when they push the thumbstick in the upward direction, but what happens if they turn their head while moving? Should forward mean the new direction the headset is facing, should it remain the direction it was when movement started, or should forward perhaps depend on the orientation of the controller? This is one aspect of locomotion control where people usually appreciate being able to choose their preferred method.

Every mapping type comes with pros and cons in different scenarios, so consider carefully how users will interact with your experience and choose your mappings wisely. It is paramount, both for usability and for preventing motion sickness, that the user can easily anticipate how the camera will move through the virtual environment in response to their head movements and controller input. Your experience should give users an opportunity to learn and acclimate to the direction mappings available to them, and ideally they should be able to choose the one that best suits them.
Some of the useful variations in how the forward direction is calculated during player movement include:

Head-Relative: With head-relative logic, thumbstick input is interpreted relative to the direction in which the user’s head is facing. Pushing the thumbstick upward causes the avatar to move in whichever direction the user’s head is facing. Turning the head while the thumbstick is pushed in any direction will continuously change the direction of movement so that it is always relative to wherever the headset is facing.

Initial Head-Relative: With initial head-relative logic, pushing the thumbstick forward causes the avatar to move in the direction the headset was facing when the user initiated the movement. Unlike head-relative controls, if the user turns their headset while pushing on the thumbstick, the direction of movement won’t change. For instance, if a user facing north pushes the thumbstick upward, they will move north, and until they release the thumbstick, they will continue moving north, even if they turn their head to look sideways.

Controller-Relative: With controller-relative logic, pushing the thumbstick upward causes the camera to move in the direction the hand controller is pointed. The headset’s orientation has no impact on the direction of user movement; it is always relative to the orientation of the controller. This allows someone to steer by simply turning the controller itself while leaving the thumbstick pushed in the same direction.

Initial Controller-Relative: With initial controller-relative logic, pushing the thumbstick upward causes the avatar to move in the direction the hand controller was pointed when movement was initiated. Unlike controller-relative logic, turning the controller while pushing the thumbstick will not change the direction of movement, which is similar to how initial head-relative movement behaves.

Controller-relative direction mapping is perhaps the most common approach today; however, the list above is not comprehensive. There are applications that demonstrate subtle but useful variations on these patterns in an effort to infer user intent more accurately and improve the experience. For many designs, supporting these different control schemes can be a relatively simple task, so we recommend providing them as options whenever possible.

### Turning Controls

Artificial turning based on thumbstick input should be supported whenever it is compatible with the application design. The choice will be especially appreciated by those who are in chairs that don’t spin, use wheelchairs, are tethered to a PC, or simply prefer not to spin around. Everyone has different preferences and needs, and meeting these needs can be the difference between a user enjoying your experience or avoiding it altogether. With this in mind, the three most common artificial turning control schemes are described below.

#### Quick Turns

The camera turns a fixed angle for each tap on the thumbstick. The angle of each turn per tap is usually 30 or 45 degrees, which occurs over 100 milliseconds or less. People generally have strong preferences for how this should be tuned, so it’s always a good idea to provide the option to tune both of these values. The goal is for the turn to be slow enough for people to keep track of the surroundings, but fast enough that it doesn’t trigger discomfort. It is important to register all taps on the thumbstick so that people can continue to provide turning input signals, even though a turn may already be in progress.
The system should not disregard taps that occur while the camera is in the process of turning, and it should immediately change direction if the accumulated target shifts the other way. This affords people the opportunity to begin a turn and immediately turn back, or to tap a few more times if they know exactly how far they want to turn.

![Visualization of thumbstick pressing down and player FOV moving to that direction smoothly and at a medium pace.](https://lh5.googleusercontent.com/fGMoFDrjmxMwz7cdDrJtPJZPDV_A2zNCdLAcIGt2lu3rYvl-wD-jzl5EvvVLca-QiyYMAGVl6AjPxoUeSf15hzqp4fDncjftKSjkAMZIhFJnuDfhkNxBZ_U5eWEX_rLZr6tIlaL9)

#### Snap Turns

Snap turning is similar to quick turning with respect to the accumulation of thumbstick taps to set the desired direction, but instead of smoothly turning over time, the camera immediately faces the final direction. If any kind of transition effect delays the reorientation of the player to the new direction, accumulating thumbstick taps is necessary so that multiple taps always result in rotating a fixed amount per tap. It is a common problem for applications to disregard thumbstick taps once a time-consuming turn has begun. This leads to an interface that responds unpredictably to directional controls, because people need to wait until the turn has completed before triggering another turn; if they don’t, the input signal is disregarded.

![Visualization of thumbstick pressing down and player FOV snapping to that direction.](https://lh5.googleusercontent.com/PrKgGd9dY6e5TB1hE3ftceskrYUjSCwr7Ogc4YCPjOjr2xy9EsaF8fpbxZoKDoVPNNACMf8NVNH-9QwZwWmRMKJo8Mx96n689yoxa8a-NGb9sOuGPU0lPRYIR_SN4J4ioinXkety)

#### Smooth Turns

The camera turns at a speed relative to how far the thumbstick is pushed left or right. This can be uncomfortable because the movement in the field of view causes users to expect a corresponding sense of physical acceleration, which doesn’t happen because they are not physically turning. Because turning starts as soon as the thumbstick leaves the center position, the angular velocity will vary based on precisely how far from center the thumbstick happens to be, which often leads to further discomfort. While smooth turning is considered uncomfortable by many, it’s possible to improve this behavior as described in the following section: [Improved Smooth Turning](https://developer.oculus.com/resources/locomotion-design-turns-teleportation/#smooth-turn-improvements).

![Visualization of thumbstick pressing down and player FOV smoothly turning to that direction in a relaxed pace.](https://lh4.googleusercontent.com/_v7u8UKRIsBuy97a_zMzcjqAJfBDpY9qharqhhTL6-VCRfwqTuKRJUGCb92eyEIvXrzzwspiQwHpLkNfMWVtU7RA1OXvpzTSAlGGvjRFsD61kbQFrKDfaL_ZAUKU9V9ncZcELAbP)

Teleport Controls
-----------------

The process of performing a teleport is a sequence of events that generally includes activation, aiming, potentially controlling the landing orientation, and finally, triggering the actual teleport. While it’s common for teleports to be triggered by a simple button or thumbstick movement, some designs integrate teleport controls into the gameplay, which makes it possible to choose the destination by throwing a projectile or through some other mechanic unique to the application. There is room for creativity, but if you are looking to implement teleportation in a way that is familiar to many VR users, this section describes a few common approaches.
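As a preview of the approaches below, that sequence (activate, aim, set the landing orientation, then trigger or cancel) can be sketched as a small state machine. The `aim_at` and `teleport_to` hooks are hypothetical stand-ins for the application’s beam rendering and camera move, and the deadzone value is an assumed tuning parameter:

```python
import math

DEADZONE = 0.25  # assumed activation threshold for the aiming stick

class TeleportController:
    """Sketch of the common teleport sequence: activate, aim, choose a
    landing orientation, then trigger on release (or cancel)."""

    def __init__(self, aim_at, teleport_to):
        self.aim_at = aim_at            # hypothetical hook: draw beam/indicator
        self.teleport_to = teleport_to  # hypothetical hook: move the player
        self.aiming = False
        self.landing_yaw = 0.0          # radians; stick pushed up = 0

    def update(self, stick_x, stick_y, cancel_pressed):
        deflection = math.hypot(stick_x, stick_y)
        if not self.aiming:
            # Activation: pushing the stick out of center starts aiming.
            self.aiming = deflection > DEADZONE
        elif cancel_pressed:
            # Cancel: abort without moving the player.
            self.aiming = False
        elif deflection > DEADZONE:
            # Aiming: the landing yaw follows the stick only while it is
            # clearly deflected, so relaxing the stick toward center to
            # trigger the teleport cannot change where the player will face.
            self.landing_yaw = math.atan2(stick_x, stick_y)
            self.aim_at(self.landing_yaw)
        else:
            # Trigger: releasing the stick performs the teleport.
            self.aiming = False
            self.teleport_to(self.landing_yaw)
```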
### Thumbstick-Triggered Teleportation

A popular approach to initiating a teleport is to activate the process when the thumbstick is moved out of its center position. An aiming beam appears, which people then point at the desired destination. Aiming continues for as long as the thumbstick is pushed. When the thumbstick is released, the player is teleported to the destination.

Using a thumbstick to control this process makes it possible for the user to control the direction they are facing after teleportation. While aiming at the destination, the thumbstick is used to control the direction of an indicator shown at the end of the teleport beam. When the teleport is completed, the player perspective will face the direction of the indicator. It is generally useful to provide a way to cancel the teleport by pressing another button or clicking the thumbstick, since releasing the thumbstick usually triggers the teleport.

The tricky part of this technique is ensuring the act of releasing the stick and triggering the teleport does not change the landing orientation. One way to approach this problem is to carefully monitor the thumbstick input so that when it moves a small distance back towards the center position, the indicator direction remains where it was last set.

### Button-Triggered Teleports

In some cases, the thumbstick on the dominant hand may not be preferred or available to trigger teleportation. In that case, you can use one of the standard buttons on the controller to initiate the teleport. The thumbstick in the user’s non-dominant hand can be used to control landing orientation when teleport aiming is active, or to cancel the teleport by clicking down on this thumbstick before releasing the button that activated the teleport. Button-triggered teleportation also frees the dominant-hand thumbstick for other locomotion needs, such as controlling snap turns, although it is recommended to disable these snap turns while the user is pressing the controller button to teleport.

Motion-Tracked Locomotion
-------------------------

It’s possible to enable locomotion functionality using motion controllers without using controller buttons or thumbsticks for input. These techniques often consider posture, hand poses, and other physical movements as signals for controlling movement. Applications that use hand tracking (when controllers are not held) may find the examples below useful for controlling artificial locomotion. Because hand tracking is a relatively new type of input, there’s a big opportunity to pioneer new best practices in this space.

### Simulated Activities

Simulated activities map the user’s physical movements to a real-world or imaginary activity. The intent is to mimic the physical movements you would make if performing these activities in a real-world setting. From swimming and running to flying like a superhero, pretty much anything you can imagine is possible.

Running: Detecting when the user’s arms swing like they do when people run can be used as an input signal to make the avatar run.

Paddling Motion: Tracking the movement of the user’s arms as they move in a paddling motion, as in propelling a boat.
[Phantom - Covert Ops](https://www.oculus.com/experiences/quest/2302118823192509/) by [nDreams](https://www.ndreams.com/) is an excellent example of paddle-based locomotion. Simulated activities are amongst the more immersive ways to control locomotion. Some of the most exciting surprises and successes have come from games that replicate real-world activities within a VR experience.

### Pose-Driven Controls

While the simulated activities outlined above require users to move how they would expect to move in the context of the real-world experience, pose-driven controls use learned poses and abstract gestures to control movement. Pose-driven controls are a bit more abstracted from the intended activity, and usually require some training or guidance so people know what motions cause each effect. These poses and gestures can take many forms; one example is to point with one hand and initiate the movement with a gesture on the other hand. Widely adopted standards haven’t yet emerged for pose-driven controls, which makes it a space ripe for experimentation. Here are a few useful tips if you’re considering experimenting with abstract gestures:

- Ensure that your app detects the pose reliably every time. This is critical.
- Provide consistent, reliable feedback when a gesture is detected, so the user knows their intent has been understood by the system.
- Provide clear guidance for how to use the gestures that are available.
- Keep gestures simple enough to memorize so they can be used with minimal cognitive effort.
- Minimize the number of supported gestures, making them easier to use as a reflex.

Seated Control Considerations
-----------------------------

The goal of supporting a seated mode is to simulate a standing experience while seated. Fatigue is a real issue for many people. Even when an application isn’t clearly designed as a seated experience, many people will minimize their movement during long sessions, as long as it doesn’t detract from the experience. Seated mode is also important for accessibility. It is recommended to support a stationary, seated position unless the design is fundamentally incompatible, because this will allow more people to experience your content.

When seated mode is supported, there should be messaging early in the experience (possibly within the tutorial) that enables the user to choose between seated and standing modes. If seated, the avatar height, and in turn the camera height, should be set to an elevation that defaults to the player height reported by the system. This height should also be customizable within the game options. If the height were based simply on the headset’s elevation from the physical floor, avatars would be shorter and the user perspective would not match what they see when standing.

When people are physically seated, additional controls may be required to provide the full range of movement in VR that would be available if the user were standing and walking around the play space. Support for artificial turning is necessary because people won’t always be in a chair that spins.
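A minimal sketch of the seated height adjustment described above, assuming a playspace origin that can be raised (as in the earlier rig sketch); the function name and calibration values are illustrative:

```python
def seated_height_offset(standing_eye_height, seated_hmd_height):
    """Vertical playspace offset so a seated player sees the world from
    their configured standing eye height. `standing_eye_height` should
    default to the height reported by the system and be user-adjustable."""
    return standing_eye_height - seated_hmd_height

# Sample the headset height once, when the user selects seated mode (or on
# a recenter action), not every frame: continuous resampling would cancel
# out deliberate physical leaning and ducking.
offset_y = seated_height_offset(standing_eye_height=1.60, seated_hmd_height=1.15)
# Raise the playspace origin by offset_y (0.45 m in this example).
```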
If the experience benefits from physically ducking or crouching, it will be important to provide controls to toggle crouching, along with any other poses required to access specific content, so that seated users can reach it as effectively as someone who is able to physically move around the play space.

It’s useful to think of the transition between standing and crouching as a form of artificial locomotion, and to be aware of the comfort risks related to moving the camera up and down like this. It’s recommended to make the transition between camera elevations as brief as possible. The camera movement shouldn’t be instant, because this would create a visual discontinuity that can trigger disorientation; and since people don’t crouch instantly, a brief transition also appears more natural to an observer. If the transition takes too long, however, it can itself become a trigger for discomfort.

What’s Next: Design Challenges + Accessibility Design Guide
------------------------------------------------------------

Especially in regard to seated control systems, designing for accessibility is essential to locomotion, and to VR app design as a whole. We recommend checking out the full guide on [Accessibility Design](https://developer.oculus.com/resources/design-accessible-vr/), especially the section on [Accessible Locomotion Design](https://developer.oculus.com/resources/design-accessible-vr-locomotion/). If you’re looking to learn more about the higher-level topic of locomotion, we recommend checking out the following section, which covers [Locomotion Design Challenges](https://developer.oculus.com/resources/locomotion-design-challenges/).

Design Challenges
=================

Designing for physical or standard avatar locomotion can lead to environments with limited architectural designs, for example, those that only work well when moving across a continuous floor or within a fixed area. This can lead to designs that feel limited or constrained when compared to non-VR games. Common examples are ladders, gaps, and ledges. Special logic and planning are required to support locomotion across features like these. This will add scope to your project, but the right approach to these scenarios will enable more creative designs due to the increased freedom of movement.

The Importance of a Test Environment
------------------------------------

It’s important for the environment to be designed so that it works with the locomotion system. Every locomotion system will have limitations, and the environment needs to be built in a way that allows people to move effectively within these constraints. It is helpful to plan for this early in your development process by identifying what kind of locomotion behaviors the design requires, and creating a test environment that makes it easy to quickly iterate, testing all aspects of the locomotion system.

Features of the test environment will typically include a variety of slopes and materials, doorway dimensions, step heights, path obstructions, tunnels, ceilings of various heights, ledges, ladders, and so forth. Your test environment should ideally include every possible scenario that can test the compatibility of the locomotion system. Many teams have been surprised to see just how much work it takes to guarantee their locomotion system is reliable and comfortable, and streamlining this iteration process is always worth the effort.
Learn more about playtesting and ways to improve your iteration time in the following Oculus Developer Guides:

- [VR Playtesting Guide](https://developer.oculus.com/resources/playtest-guide/)
- [Efficiency and Iteration Time Best Practices](https://developer.oculus.com/resources/maximize-efficiency-minimize-iteration-time/)

Beyond Walking: Climbing and Leaping
------------------------------------

In many designs, the environment needs to include areas that are not accessible by walking alone. Moving through these areas requires another form of movement, such as teleportation, climbing, or leaping.

Short-range teleportation can be a useful tool for navigating discontinuities in the environment, which in turn gives level designers a lot more flexibility in their map layouts. For example, a short-range teleport combined with a warp mechanic can feel very much like a leap over a gap or ledge. This mechanic can be an effective way to navigate past areas which might otherwise require cumbersome or non-intuitive actions from the user. Even if the majority of the game is designed for standard avatar motion, supporting short-range teleportation to move through certain parts of the terrain is an effective way to support more variety in the environment.

#### Climbing

Climbing in VR can be an especially challenging design problem. For example, if someone is climbing a rock wall, what should happen if they physically move away from the wall? Here are some common climbing scenarios to consider:

Transition to climbing surface: It’s easy to imagine virtually moving up next to a ladder or rock wall, grabbing it, and moving upward. This scenario gets especially complicated when the ladder (or other climbing surface) isn’t approached from straight ahead, or from above. Traditional flatscreen games will often take control of the camera from the player to reposition the avatar onto the ladder. In VR, overriding the camera orientation and position should almost always be avoided because of the potential comfort issues, but not forcing the orientation leads to another set of problems related to how the character interacts with the ladder and how they appear to other players. These challenges are closely related to those that arise when the user turns while climbing, discussed below.

Turn while climbing: If someone turns away from a ladder, either physically or artificially, the grip on the ladder may need to break in order for the character visuals to make sense. For example, if someone is on a ladder and snap turns 90 degrees away, what should the hands do? If they remain attached for as long as the grip is held, the avatar may wind up in an unrealistic pose, which only gets worse as the player physically moves away from the ladder. Depending on the design, it could make sense for the hands to disconnect and the player to fall, or for the avatar to remain attached to the ladder with its orientation forced to stay facing the ladder. Be sure to recognize the comfort risks of each of these options. For example, if you choose to force the avatar to remain facing the ladder, this will lead to a sense of vection, because the physical movement away from the ladder will require artificial locomotion to keep the avatar in place on the ladder. There are many possible ways to respond to these situations; however, if it doesn’t have a negative impact on gameplay, it is often a good idea to simply warp-teleport the player to a safe place near the top or bottom of the climbing area.
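One way to make the turn-while-climbing decision concrete is a simple angular threshold: keep the hands attached until the player’s body has turned too far from the climbing surface for the pose to remain plausible, then break the grip. A sketch of that policy, with the threshold as an assumed tuning value:

```python
import math

MAX_GRIP_ANGLE = math.radians(100.0)  # assumed tuning value

def should_release_grip(body_yaw, ladder_yaw):
    """Break the grip once the player has turned too far away from the
    climbing surface for the avatar pose to remain plausible. Yaw angles
    are in radians; the difference is wrapped to [-pi, pi)."""
    delta = (body_yaw - ladder_yaw + math.pi) % (2.0 * math.pi) - math.pi
    return abs(delta) > MAX_GRIP_ANGLE
```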
![Left: VR player and associated avatar in space suit climbing ladder. Right: Same player turning to look down from ladder and associated avatar with arms stretched awkwardly.](https://lh5.googleusercontent.com/Am6qhTCyfedVjZTRemqKJ7y64F2k4xpKCjy96srPPVqGw_rhDKKu-S-jEW_6HsSIA7oZUk0zm8hFn1b67czar2wMPQFZ2fjMNSOPBiflcCRRhlPcjsBM3hKzRG3hRAwCeIffCdDJ)

When climbing, avatars are often attached to virtual ladders or other surfaces. Since there is nothing preventing someone from physically turning or stepping away from the attachment, it is necessary to decide how the avatar should respond when this happens. In this case, the arms are still attached to the ladder because the player has moved away while keeping the grip buttons pressed.

Transition from climbing surface: It can be difficult to design for the moment when the user needs to climb off a ladder that ends at a roof line, because the avatar will need to continue to move up and above the top of the ladder, then forward to get onto the platform. This act of ledging is generally best accomplished with a rapid, scripted movement that transitions the user to an appropriate position nearby where standard locomotion is available.

Climb down: In VR, as people climb down a ladder, it can be difficult to know when to stop. What happens if their feet have reached the ground, but they continue to climb down? At some point, the locomotion system needs to detect this scenario and prevent further downward movement.

#### Alternatives to Climbing

Due to the challenges related to climbing, it may be a good idea to provide a teleport mechanic that avoids the need for hand-over-hand climbing entirely. For instance, one option is to enable the user to walk up to the ladder, and provide some means by which the user is able to teleport to a spot near the other end of the ladder.

#### Jumping Over Gaps

If you have a gap that a user must get over, a teleport with a warping mechanic can make for an ideal leaping opportunity. Even if you intend to focus on physical locomotion or standard avatar movement throughout your app, teleports are still a useful mechanic for environments that feature ledges, ladders, crevasses, or any other gaps the user must cross.

Cameras in Geometry
-------------------

One of the most important decisions regarding camera behavior in VR design is whether or not the camera will collide with the environment. It’s generally easier to implement cameras without collision; however, the application design may require cameras to collide with the environment. Both approaches have implications for the usability, design, and comfort of the application, which are all discussed in more detail below.

![VR player looks through wall, shows player capsule on one side and player avatar viewing a wooden box on the other side.](https://lh6.googleusercontent.com/Z_O_TeBXawvUZJhtKpihuU6H5-OIRUfHMqiw4r01cb_4tExmduls6U7xq58_zgtKiugzQqsew4oBiWC56Tppi6CUXIVAnuw09zNHiQlWytddSKgt0HFrw7RTqzCk4rpAwhVH7XSk)

#### Cameras without Collision

If the locomotion system doesn’t prevent the virtual camera from moving through objects in the environment, such as walls and doors, be sure to consider the following challenges:

- Narrative information that you want to keep hidden may become visible by physically moving the headset into a locked area.
- Interactions into neighboring rooms may become possible.
- Virtual movement follows physical movement, which prevents designers from creating inaccessible areas because people will move however they want.
Some solutions for these challenges include:

- Enable collision between the camera and the environment.
- Disable rendering of the environment when the camera is not in a valid space. For example, pushing the virtual camera into a wall may cause the display to show a blank screen instead of the next room (or a fixed frame of reference, discussed below). This can be problematic if someone has physically locomoted a few steps out of valid space, because they may not be able to easily find the valid space if they only see darkness; however, this can be addressed by activating visual effects to guide them back to a valid space.

#### Cameras with Collision

Camera collisions are sometimes necessary to meet design goals, but they can also lead to uncomfortable moments for the user. When someone physically moves and there is a collision within the VR experience, the camera needs to stop, but as nothing will stop the user’s physical movement in the real world (assuming they remain within guardian space), the player will see the wall moving away from them as they attempt to walk into it. This will produce potentially uncomfortable vection for the user, and it is unavoidable if the camera is unable to pass through virtual walls or other objects. With careful planning, however, the effects can be reduced.

It’s useful to consider objects moving in the environment, such as swinging doors or projectiles, that could collide with the camera as the user moves through the virtual environment. These collisions need to be considered, and it is generally recommended not to move the camera in response to moving objects, because if every small object can block camera motion, this can lead to unpredictable vection events. One approach is to configure the collision system so that the camera only collides with stationary environmental objects, or with those objects that absolutely must move according to their scripted requirements, such as elevators.

When independently moving objects, such as doors or projectiles, collide with the camera, they should either move away from the camera or ignore the collision in a way that doesn’t cause rendering artifacts. In either case the camera should be unaffected by these collisions. If the object moves away from the camera, this can lead to situations where a movable object is squeezed between the camera and fixed world geometry, which can cause instability in the physics simulation. Objects in this situation will usually resolve the collision by popping out somewhere nearby, but they may also get moved outside of the valid environment, so care should be taken to make sure the object winds up in a reasonable location. A common solution for this scenario is an out-of-world position check which either destroys the object or resets it to a known safe location.

Peek Over Ledges
----------------

If a player is near a ledge or cliff, it is natural for them to lean over and look down. The design problem here is that virtual movement typically follows the physical position of the headset. That is to say, if the headset moves, the character position typically moves to follow the headset (depending on the collision behavior discussed above). If the headset moves over the edge of a cliff to look down, there is a risk that the avatar will fall off the edge of the cliff by accident. This problem is most common in situations where the character capsule has a small radius.
A simple way to address this problem is to increase the width of the capsule, which reduces the risk of the user accidentally falling. While this is a quick fix, the larger collision capsule may not be suitable for movement in the rest of the environment.

![Left: VR player on ledge with associated monster avatar and player capsule. Right: VR Player looks down from ledge expressing surprise. Monster avatar falls off ledge and capsule moves slightly over and down off ledge.](https://lh4.googleusercontent.com/QSBki16lcOdXazkmqqf3lQ_bnZZkpUPlf-77O2hEWAbSUFkhefqZ_ZB27tbJD6plhwJrD_QrW7r4veHHZuTWjFmZGX6zuSw_dmnqrOGPWoHzffEBZWiwjT3_ehfy2UAlbGvVR7Lz)

When the movement capsule is locked to the headset position, leaning over a ledge to look down can cause the avatar to fall over the edge.

A better way to prevent accidental falls is to constrain the character capsule to the HMD position with a short positional constraint (think of it as a leash) so that the HMD is essentially pulling the capsule around behind it. With this technique, the user can move the headset a short distance without moving the character capsule, which provides more control near ledges because the HMD can be moved over the edge of the cliff without dragging the capsule off the ledge. It is important that the HMD has its own collision shape, typically a sphere, so that it can detect and respond to collisions with the environment.

Lean Over Objects
-----------------

The avatar’s collision capsule is often constrained to the headset position. Generally this means that as the headset moves around the physical space, the collision capsule follows the user’s movement within the virtual environment. When the collision capsule hits something, such as a desk or another object taller than the maximum step height, it can prevent the virtual camera from moving forward. This can lead to situations where people are unable to simply lean over the object, because the capsule collision treats this scenario as if the user were walking into a wall. The solution here is to implement the same positional constraint behavior described for ledges above. This will allow the user to lean over the object as far as the constraint limits allow, while the character capsule remains in the correct position next to the object.

Adjust for Height
-----------------

In some designs, the ceiling is so high that the height of the collision capsule is not relevant, but it is important to plan for scenarios where there are objects in the environment low enough to collide with the player’s head. It’s not uncommon for applications to use collision capsules that have a fixed height, with the content tuned for that height so that the user can move through areas in a consistent manner. This might seem like a good idea, but it can be problematic when someone is taller than the standard capsule, causing the camera to poke into the ceiling. Similar issues are present for people who are shorter than anticipated, which could lead to situations where someone can visibly walk underneath an obstacle but is prevented from doing so because the collision capsule is taller than the user.

One solution is to adjust the avatar’s collision capsule height to match the headset’s offset from the ground. This has the useful side effect of enabling the user to crouch under obstacles, and it prevents overhangs from unexpectedly blocking movement.
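The leash constraint and the height adjustment combine naturally into one per-frame update. A minimal sketch, with the leash radius as an assumed tuning value (before moving on, note the caveats in the next paragraph):

```python
import math

LEASH_RADIUS = 0.25  # assumed max horizontal HMD-to-capsule offset, meters

def update_capsule(capsule_x, capsule_z, hmd_x, hmd_z, hmd_height):
    """One frame of the 'leash' constraint plus height matching: the
    capsule follows the headset only once it strays past LEASH_RADIUS,
    and the capsule height tracks the headset's offset from the floor."""
    dx, dz = hmd_x - capsule_x, hmd_z - capsule_z
    dist = math.hypot(dx, dz)
    if dist > LEASH_RADIUS:
        # Pull the capsule toward the headset, leaving it LEASH_RADIUS
        # behind, so a short lean over a ledge or desk moves the HMD
        # (which keeps its own collision sphere) but not the capsule.
        t = (dist - LEASH_RADIUS) / dist
        capsule_x += dx * t
        capsule_z += dz * t
    # In a real engine, this move and the height change should be swept
    # through collision rather than applied directly.
    capsule_height = hmd_height
    return capsule_x, capsule_z, capsule_height
```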
Depending on the design of your application, it may not be necessary to support this; however, it can be useful for designs that require the user to crouch to move through crawl spaces and other similar areas. Keep in mind that dynamically adjusting the height of the character capsule can be tricky. For instance, if someone stands up while the space above the avatar is blocked, the locomotion system will need to continue to behave in a way that is still usable and comfortable. A hard collision with the ceiling might result in the user physically standing upright while moving through a space that only has room for crouching, and when they leave the confined space they may remain crouched in VR unless the system detects this state and restores the capsule to the ideal height. It may be simpler, and potentially more comfortable, for the camera to move without collision and respond in the same way as when the camera moves outside of the environment, as discussed in the Cameras without Collision section above. Just remember that this will impact the user’s sense of immersion, as poking the camera through walls is not as realistic as being blocked by them.

![Left: VR player stands upright with capsule sized to fit. Middle: VR player in squat position, with capsule still as tall as the upright player. Right. Player avatar squats and player capsule runs into top of doorway preventing passage.](https://lh3.googleusercontent.com/YXiIgP1oXcGcrRvHtCAHIZ99Et93Q3dkyWcS6I1w7JANrGHuzwUcfzX8WgsCerwHtzy_LhtEc_R8beRnzpF4b2_-eyOA2Z0vf0vjUrp0J0-pX3pqCB-dIvDJRVj_BwKQ8Rh_qMPC)

Depending on the environment, the player capsule may need to shrink in order to enable access to areas with low ceilings.

How to Minimize the Side Effects of Teleports
---------------------------------------------

Teleports are difficult for an observer to follow and problematic for non-player characters (NPCs) trying to reach the player. If an NPC is trying to get into someone’s melee range, the player will be able to break the game design by simply teleporting right before the NPC reaches them. Games have addressed these challenges in different ways, for example by implementing a stamina pool to limit how often people can teleport. Unfortunately, this can feel unrealistic.

In multiplayer experiences, the environment might be filled with other players moving around unpredictably as they teleport. The user might observe other players standing still, making slight movements due to physical locomotion, or disappearing and reappearing in random locations. All of this activity can make the user lose their sense of continuous movement, and as a result, the experience feels much less immersive.

![VR player looks left/right in confusion as they try to locate a second player who teleported from a position to the left, to a position on the right.](https://lh6.googleusercontent.com/GoDkmzf-I7QPDuZYpSb4SajUkoaTuEywKtXqbq3cyztNZ8LKbVqiai_wAhlScXNE5ceDklMi99hmXrJsCndDXeWvRUE6UhKOjHyaa1VLob5M4Ha-m3i1Ld8FrhwUaMcj12lYJ0ON)

Users generally want to understand where the other players are within their space, as opposed to having them simply disappear and reappear around the map. You might display some sort of visual effect to indicate player movement, which can help inform observers where people are moving. This helps with multiplayer experiences, but it does not address the problems related to NPCs.
An effective solution for both problems is projected avatars, where teleportation is the result of navigating the user’s avatar to a new location in a third-person perspective; when the location is selected, the user’s first-person perspective changes to that new location. All observers, both human and NPC, see the projected avatar moving through the environment with natural and continuous motion. This neatly solves both the multiplayer and NPC issues associated with teleportation. Projected avatars are also useful for improving comfort and usability, as discussed in the Improved Turns and Teleports section.

What’s Next: Comfort and Usability Considerations
-------------------------------------------------

Now that you have established your understanding of the many potential challenges that come with designing a locomotion system, we recommend grounding yourself in the issues that can arise around user comfort and usability with the following section: [Comfort and Usability Considerations](https://developer.oculus.com/resources/locomotion-comfort-usability/). This page not only outlines the most important considerations, but also features an outline of which locomotion types and techniques you may want to leverage as you design your locomotion system.

Comfort and Usability
=====================

It’s important to consider comfort and usability when planning your locomotion system. This guide discusses techniques related to making locomotion as comfortable as possible for your VR app users.

Comfort Risks
-------------

A comfortable VR experience is generally achieved by minimizing sensory mismatches and discontinuities with our real-world experience, and by getting as many of our sensory processes to agree as possible. In this section, we present the key comfort risks you should consider as you design your VR app for maximum user comfort.

### Vection

Vection is the illusory perception of self-motion based on visual input consistent with such motion. It occurs most commonly when using artificial locomotion to move or turn in the virtual environment. The brain weighs the input from your sense of vision so heavily that you may feel as if you are moving despite actually staying still. Discomfort can arise when vision conflicts with information coming from your vestibular sense or your sense of proprioception.

![VR player stands in place with hands slightly pointed out to the right while avatar runs in same direction.](https://lh6.googleusercontent.com/AR_6wMblz-B-bJOcx2qqi7iSl8mTO7B0Mx37vdtPCphqzIiSI_wmmO0CBhg7jGdDdzIq4VB4pKnUecC1QOQeco9oxurIqwWjf52P0Xm176tRExDBBIfim3CD5Sk4PZzZvlfxuTog)

### Vestibular Sense

Vestibular sense, often referred to as your sense of balance, is detected by a set of structures in your inner ear known as the vestibular organs. These work like a biological inertial measurement unit, similar to the one in your headset, to detect the direction of gravity and any other acceleration of your head along the six degrees of freedom of motion: yaw, pitch, roll, and x-, y-, and z-axis translation. It is important to note that this sensory system specifically responds to any change in the head’s motion vector, either rotational or translational. This has two important implications: increasing or decreasing speed in any direction will stimulate the sensory receptors in the vestibular organs, and the vestibular sense no longer signals motion to your brain when moving at a fixed velocity.
This is because, after moving at a fixed velocity for a sufficient period of time, the vestibular organs are no longer stimulated. This is why many people find constant-velocity motion in VR relatively comfortable.

### Visual-Vestibular Mismatches and Comfort

Disagreements between your vision and vestibular system can lead to motion sickness. If you’ve ever gotten carsick from reading a book in a moving vehicle, or seasick on a boat when you couldn’t see the horizon outside, you have experienced the effects of a visual-vestibular mismatch. In those cases, vision tells your brain you are standing or sitting still while your vestibular sense tells your brain you are moving. Vection in VR creates the opposite but similarly discomforting experience: vision says you are moving, but your vestibular sense says you are staying still.

While different people have varying susceptibility to motion sickness, it is common for people to experience some degree of discomfort as a result of visual-vestibular mismatch when using VR, particularly when it is a new experience for the user. This can discourage them from returning to your experience or even turn them off VR in general. Therefore, we highly recommend preventing or mitigating visual-vestibular mismatches as much as possible.

### Proprioception

Proprioception refers to the perception or awareness of the position, movement, and extent of the body and its constituent parts. Your brain calculates this complex representation from sensory information about how different muscles are contracted or relaxed, the degree of flexion in different joints, and how touch receptors in your skin are or aren’t being activated. Proprioception is why you can, for example, close your eyes and touch your nose, and otherwise know where and how your body is posed in space.

When the virtual representation of our body doesn’t match the mental model or perception of our physical body, this creates a visual-proprioceptive mismatch, which can be uncomfortable and negatively impact one’s feeling of immersion. Common proprioceptive mismatches occur when head or hand tracking is unreliable or has too much latency, or when poor inverse kinematics (IK) cause the user’s avatar body to take on a pose different from what their real body is doing. These are just some of the ways that what we see in VR may not match what we feel.

### Disorientation

Disorientation occurs whenever the user loses track of their position in their environment. This happens most commonly when the camera perspective suddenly changes significantly and it takes a moment to reorient oneself within the world. It is associated with teleportation, snap turns, and any other discontinuities in the camera position or orientation.

![VR player in adventure environment. Player teleports across level but is disoriented because the direction they’re facing has changed after teleporting.](https://lh6.googleusercontent.com/dhr0422d7D9l3DlgRkw6uq-5gFQ9AihCmpCB2uRXFLGezioJWUDZ3bizkSifecen4TDWty3SN4dtPbiFVrDpThLS-MFq6tGwLpAIy1Velbznni-BBDDugs9F7RdY3ui_W-mMzkWZ)

Usability Issues
----------------

When building a locomotion system, it’s important to plan for the users who will be using it, their needs, and other individual factors that can affect how the locomotion system will work for them.

### Space Limitations

While it may be tempting to simply require a play space large enough to fit your virtual environment, this will drastically limit your potential audience.
Some users need to play in stationary mode because they have just enough space to use VR so long as they stand or sit in one place. In practice, this means that unless the application is designed to be played from a stationary position without turning, it will need to support some kind of artificial locomotion or turning in order for everyone to be able to access the entire virtual environment.

### Fatigue

Relying on physical locomotion to move around in your app can lead to user fatigue. Users sometimes start with a more physically active play style, but switch to a more relaxed style as gameplay progresses. While designing for continuous physical movement is not necessarily a bad thing, it should be a deliberate choice in your app design, because it can limit the duration of play sessions. For some people, it may determine whether they can experience your app at all.

![VR player leaning on couch, showing signs of fatigue.](https://lh3.googleusercontent.com/PDn5VtWYZT_BevXfE557ass-Sl9YOHQt6aSeZLKZp1GqZLxJsfNPPqPitVxgmrcu__iZjb9VuIzsY0RREjT-IGli4GYPiTReWPqFkmLzsPzR14we-DxF5R0HV4j-Qreagv2hc1tt)

### Accessibility

Accessibility is important to consider when designing any experience. Some people may have physical needs that require them to stay seated, and others may simply prefer to stay seated during their VR experience. This can make certain physical actions frustratingly difficult or even impossible to execute. It can be helpful to consider a user’s physical limitations as they pertain to your controller system, how much dexterity will be required to perform certain actions, and how hand tracking might improve your app for people who have difficulty using hand controllers. Check out the full guide, [Designing for Accessibility](https://developer.oculus.com/resources/design-accessible-vr-design/), as this is a significant topic that impacts how you approach your app design.

![Left: VR player sitting in wheelchair. Right: VR player sitting on couch.](https://lh3.googleusercontent.com/eR8wCLRMXAm1nO0khwzX4bJud5aIHuXK6FCH1S3nKdLObpPV4U6aRgAiUzuZLfB50fleUvuLswym_H15RC8kVHSfouSJ2bh7XV2P2rfEWRGwq108184JCIYolhFQQkozrX-tzcT4)

The Importance of Predictability
--------------------------------

People are less likely to have issues with discomfort if they can reliably predict how the camera will move through the virtual environment. This is one reason why we emphasize consistent, predictable control schemes and movement patterns here. Besides allowing the user to be more efficient with artificial locomotion and to experience less vection, visual accelerations that the user can anticipate are less discomforting than ones they cannot.

A first-person experience can be more comfortable if the user is controlling a visible avatar in the virtual environment that telegraphs how the camera will move. For example, the avatar might start walking in some direction before the camera starts moving to follow them. When the avatar stops, the camera catches up and decelerates to a stop soon after. Even a few hundred milliseconds of advance signal from the avatar allows the user to anticipate changes in motion to a degree that benefits comfort. Similarly, turning controls that always respond to input exactly the same way, regardless of what else is happening, are more predictable than those which disregard input in the middle of a turn.
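The avatar-telegraphed camera described above reduces to a simple eased follow: the avatar responds to input immediately, and the camera trails it by a short, tunable lag. A minimal sketch, with positions as tuples and the catch-up rate as an assumed tuning value (a rate of 8.0 gives a time constant of roughly 125 milliseconds, in line with the few hundred milliseconds mentioned above):

```python
import math

CATCH_UP_RATE = 8.0  # assumed smoothing rate; higher = shorter camera lag

def follow_camera(camera_pos, avatar_pos, dt):
    """The avatar moves immediately on input; the camera eases after it,
    so the visible avatar briefly telegraphs upcoming camera motion."""
    # Exponential smoothing, frame-rate independent.
    alpha = 1.0 - math.exp(-CATCH_UP_RATE * dt)
    return tuple(c + (a - c) * alpha for c, a in zip(camera_pos, avatar_pos))
```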
What’s Next: Design Techniques and Best Practices
-------------------------------------------------

With the considerations and challenges outlined above, and the previous page on [potential design challenges](https://developer.oculus.com/resources/locomotion-design-challenges/), it’s time to start thinking more tactically about how you will design your locomotion system. The following guide and its sub-sections cover several design techniques and best practices used across the VR ecosystem: [Locomotion Design Techniques and Best Practices](https://developer.oculus.com/resources/locomotion-design-techniques-best-practices/).

Locomotion Best Practices
=========================

A number of useful locomotion techniques that improve comfort in VR have emerged over the years, thanks to the efforts of many developers, innovators, and researchers. This section describes some of the more widely used techniques to maximize comfort, usability, and accessibility.

### The Importance of Supporting Multiple Techniques

People have different physical needs, control preferences, and responses to discomfort triggers. It’s important to test your application with as many people as possible to learn about user expectations and identify any scenarios that are likely to cause discomfort. This will help you select locomotion options that allow your audience to move the way that works best for them.

### Simplify the Experience for Choosing Comfort Options

While it’s important to provide useful options for locomotion, too many can overwhelm the user. It’s a good idea to offer a choice between recommended, comfortable, and advanced settings when the application is launched for the first time.

- Recommended: Settings your team believes deliver the best overall experience.
- Comfortable: Settings that provide the most comfortable experience. Users generally know their tolerance for discomfort in VR, and choosing between this and the recommended settings will be easy for most of them.
- Advanced: Settings that expose the full range of customizations, mostly for users who have specific preferences for how to control their movement.

Users often have strong preferences for how they interact with virtual environments, and they may leave negative reviews if the app doesn’t provide the locomotion features and options they prefer. While there is still much to discover about locomotion and comfort in VR, there are some control systems and comfort features that users have come to expect, so it is important to support them whenever possible.

Locomotion Turns and Teleportation
==================================

For each of the techniques outlined below, we recommend offering controls within the application settings to customize the timing and related values of each parameter. People have widely varying reactions to sensory mismatch, and one of the easiest ways to improve the user experience is to give users control over these key variables, allowing each person to choose the balance of responsiveness and comfort that works best for them.

### Blinks

Blinks can help improve comfort during teleportation. With this technique, the screen usually fades to black, masking the change in camera perspective. The preferred duration of this effect varies from person to person, so it’s recommended that you give people the option to adjust its timing.
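Below is a minimal sketch of a blink teleport: fade out, move the camera while the screen is fully dark, then fade back in. It is engine-agnostic; the struct, callback, and default duration are illustrative assumptions, and the returned alpha is assumed to drive a full-screen black overlay:

```cpp
#include <functional>
#include <utility>

struct BlinkTeleport {
    float fadeDuration = 0.15f;  // seconds per half of the blink; user-adjustable
    float elapsed = 0.0f;
    bool  active = false;
    std::function<void()> applyTeleport;  // moves the camera rig when invoked

    void Start(std::function<void()> teleport) {
        applyTeleport = std::move(teleport);
        elapsed = 0.0f;
        active = true;
    }

    // Call once per frame; returns overlay alpha in [0, 1], 1 = fully black.
    float Update(float dt) {
        if (!active) return 0.0f;
        elapsed += dt;
        if (elapsed < fadeDuration)        // first half: fading out
            return elapsed / fadeDuration;
        if (applyTeleport) {               // screen is black: safe to move now
            applyTeleport();
            applyTeleport = nullptr;
        }
        float t = (elapsed - fadeDuration) / fadeDuration;
        if (t >= 1.0f) { active = false; return 0.0f; }
        return 1.0f - t;                   // second half: fading back in
    }
};
```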
![Snap turn multitap](https://lh5.googleusercontent.com/Ixc84EH0wJBShIgOf2MuuLrPTwtp4BlWDLuZHRWbdtMRrwM5mFERW9a1C6OUiZCPlQu0wEtN_PtR3gDazBOv_7hxlh9rIRm70ClnHahz6uEO64Qyv9aX-Qf-aq1whJ0PNTkUBGcW)

### Snap Turns

Snap turns reduce vection during avatar movement by instantly turning the camera a fixed angular distance. They are often paired with a “blink” effect (of tunable duration) to reduce, but not eliminate, the disorientation that results from the instant change in perspective. If there is a blink, or some other transitional effect that takes time to complete, it is important to accumulate additional turn events so that user intent is respected and the camera ends up oriented in the final requested direction. People should not have to wait for a snap turn’s transition to finish before triggering another turn.

Note that snap turns work by letting the user rotate their viewpoint without seeing the motion that normally accompanies it, thereby preventing vection. However, if the user is able to turn eight or more times within a second, their brain will once again perceive motion, and they will experience vection.
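Here is a minimal sketch of snap turns that accumulate input during the blink transition, so rapid taps are never dropped; the names and default values are illustrative assumptions:

```cpp
#include <cmath>

struct SnapTurner {
    float snapAngleDeg   = 30.0f;  // rotation per tap; expose in settings
    float blinkDuration  = 0.12f;  // seconds; expose in settings
    float blinkRemaining = 0.0f;
    int   pendingSteps   = 0;      // turns queued while the blink plays
    float yawDeg         = 0.0f;   // artificial yaw applied to the camera rig

    void OnTurnInput(int direction) {       // -1 = left, +1 = right
        pendingSteps += direction;          // always record intent immediately
        if (blinkRemaining <= 0.0f)
            blinkRemaining = blinkDuration; // start a blink if none is running
    }

    void Update(float dt) {
        if (blinkRemaining <= 0.0f) return;
        blinkRemaining -= dt;
        if (blinkRemaining <= 0.0f && pendingSteps != 0) {
            // Apply every queued turn at once so the camera faces the
            // direction the user asked for, with no extra waiting.
            yawDeg = std::fmod(yawDeg + snapAngleDeg * pendingSteps, 360.0f);
            pendingSteps = 0;
        }
    }
};
```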
![Visualization of thumbstick pressing down and player orientation turning in that direction smoothly at a medium pace.](https://lh4.googleusercontent.com/8WPOp3j_M94Csthn72IdaxP95hiCsTBk7hDlj0iIcRUvfu9xpGCiF84XMmXpMYZYebSDxwRukZEy85nmv-zRKCY5JFVyv8cyfrhtmWAPY5RkE6yRfXIbKryQcMraXpiJV7QOeuGU)

### Quick Turns

Quick turns can reduce the effects of angular vection and disorientation during avatar movement. During a quick turn, the avatar turns a fixed angular distance over a very short amount of time, usually 30 or 45 degrees per controller tap, completed in under 100 milliseconds. We also recommend letting users customize these values if needed. Because it happens so quickly, the rotation completes before our body has a chance to fully react to the difference between the visual and vestibular signals. And since there is no visual discontinuity, the user’s sense of direction should be well maintained after the turn is completed.

### Usability and Quantized Turns

Snap turns and quick turns rotate the user’s viewpoint in discrete increments. While this can improve comfort, it also reduces the precision with which one can reorient their viewpoint, which can potentially lead to frustration. If it is important for the user to orient themselves precisely in your virtual environment, you should provide some means to fine-tune their orientation. For example, if you know that users will want to adopt a certain position and orientation in the environment, such as directly facing a virtual screen, you could offer them some way of snapping themselves into place. While users might also orient their viewpoint precisely by simply turning their heads, be sure to consider their potentially limited range of motion, and how long they might have to hold such positions.

![Visualization of thumbstick pressing down and player orientation smoothly turning in that direction at a relaxed pace.](https://lh4.googleusercontent.com/VDiRhMUYnsgLvZ7EaZUTmOg6ClUChXx4daPBb-6UxmGIXsvGmcXL-yVw8CkUG2E3wCoNZ3CJM6ot7ZmG9obIsSPQJmxU6ZmZdPs9_sIdLJ7Z7Utl4U_dPr0LuLY2a_ystuGoGMS-)

### Improvements to Smooth Turn Design

Many, if not most, people will find simplistic implementations of smooth turning uncomfortable. Despite this, it is recommended that you provide a smooth turning option for people who have a strong preference for this control type. There have been many examples of VR applications shipping without smooth turns because of the developer’s desire to provide the most comfortable experience, only to add smooth turning later in response to customer feedback and poor ratings. We also recommend that you require users to select this mode intentionally, so that no one assumes smooth turns (and the associated potential for discomfort) are required to experience your app.

With careful tuning, the smooth turning experience can be improved. For instance, limiting the turn speed to a rate that is reached shortly after the thumbstick is moved will reduce the duration of angular acceleration: when the thumbstick X value is between 0 and 0.25, the normalized angular velocity ramps from 0 to 1, and when the thumbstick X value is above 0.25, the angular velocity stays at 1, the maximum turn rate. As demonstrated by [Stormland](https://www.oculus.com/experiences/rift/1360938750683878/) from [Insomniac Games](https://www.oculus.com/lynx/?u=https%3A%2F%2Finsomniac.games%2F&e=AT2x_N9oiyauOZBE-MYG_tc4oKPgeGZAQoBagZtFFBGCOUaDPx2pNvJ2HYmSgHSW6cIKi0dyZv2fXxJcefUztrnNmzbZFun-3R3MgdetgxgU0wPzHxxAERBrg5whvhlP8RrHllbcsN3hOQyRWvphZg), careful attention to details like these can make smooth turns considerably more comfortable.
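A minimal sketch of the thumbstick-to-turn-rate mapping just described; the ramp threshold comes from the text above, while the maximum turn rate and the deadzone handling are illustrative assumptions:

```cpp
#include <algorithm>
#include <cmath>

// Maps thumbstick X in [-1, 1] to a turn rate that ramps up quickly and then
// stays constant, so angular acceleration only lasts a brief moment.
float SmoothTurnRate(float stickX,
                     float rampEnd = 0.25f,        // stick value where max rate is hit
                     float maxDegPerSec = 90.0f) { // assumed maximum turn rate
    float magnitude = std::fabs(stickX);
    float sign = (stickX < 0.0f) ? -1.0f : 1.0f;
    // Linear ramp from 0 to 1 over [0, rampEnd], clamped to 1 beyond it.
    float normalized = std::min(magnitude / rampEnd, 1.0f);
    return sign * normalized * maxDegPerSec;
}

// Per-frame usage: cameraYawDeg += SmoothTurnRate(thumbstickX) * deltaSeconds;
```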
### Position Warp

A position warp is experienced when the camera quickly moves to a new position as part of a teleport. It can be helpful for reducing disorientation because it eliminates the visual discontinuity that would occur with an instant teleport. It’s important that this effect is very quick and uses a fixed velocity to minimize the visual and vestibular sensory mismatches. This technique is the same as the one described in [Seated Control Issues](/resources/artificial-locomotion-controls/#seated-control-considerations).

### Teleport with Projected Avatars

Projected avatars are a technique that enables the user to steer their avatar to a new position in the third person. When activated, the camera view in VR remains stationary while the avatar is seen walking away from the current perspective. Once the movement is complete, the camera perspective shifts over to match the avatar’s final position.

This tactic eliminates vection while the avatar is moving because there is no artificial movement of the camera. It can also help reduce post-teleport disorientation, because people watch the avatar move to the new position prior to the shift in perspective and can more effectively anticipate what they will see after the camera moves. It has the added benefit of eliminating teleport-related discontinuities for other players and NPCs, which is useful for app designs that would break or be exploited if objects didn’t respect the physical rules of movement, such as a player trying to evade an NPC. A great example of projected avatars can be seen in [From Other Suns](https://www.oculus.com/experiences/rift/1226573594029268), by [Gunfire Games](https://www.oculus.com/lynx/?u=http%3A%2F%2Fgunfiregames.com%2F&e=AT2x_N9oiyauOZBE-MYG_tc4oKPgeGZAQoBagZtFFBGCOUaDPx2pNvJ2HYmSgHSW6cIKi0dyZv2fXxJcefUztrnNmzbZFun-3R3MgdetgxgU0wPzHxxAERBrg5whvhlP8RrHllbcsN3hOQyRWvphZg).

### Teleport Portals

This technique presents a portal or window to the new location next to the player, showing a portion of the environment from the perspective of the new location. The teleport takes a few moments to complete, and during the transition the portal simply expands around the player, which effectively places them in their new location. This reduces the visual discontinuity associated with other forms of teleportation, since both the current and new locations are visible during the process and the visual transition between the two is not instantaneous.

Minimize Acceleration
=====================

When acceleration takes place in VR, we experience it within the headset in the form of visual acceleration. Because we aren’t physically accelerating, the vestibular sense and proprioception don’t provide a corresponding sense of motion. This mismatch can lead to discomfort, so it is useful to provide features that minimize or eliminate the disparity. As a reminder, acceleration refers to any change in the camera’s (head’s) motion vector, and therefore includes any increase or decrease in velocity in yaw, pitch, or roll, or in x-, y-, or z-axis translation. Starting or stopping rotation or translation in any of these degrees of freedom is experienced as a form of acceleration.

### Limit Duration and Frequency of Acceleration

It takes time for the mismatch between our vision and vestibular systems to trigger discomfort. If accelerations are brief and infrequent, their side effects are reduced. Quick turns and position warp are based on this strategy.

### Quantized Velocity

Quantized velocity, sometimes referred to as stepped or fixed velocity, limits virtual movement to a few specific speeds. When the velocity changes instantaneously between these speeds, there is only a momentary visual indication of acceleration, which minimizes the visual-vestibular mismatch the user experiences. Providing stopped, walking, and running speeds is generally sufficient; however, a very slow speed for sneaking may be useful for some designs.

The implementation is usually quite simple: each speed is associated with a band of the thumbstick input vector’s magnitude, within the range zero to one. When switching between speeds, make sure the input has moved well into the new speed band before changing, so that input noise can’t cause an oscillation between two adjacent speeds. Alternatively, different speeds could be triggered by combining thumbstick input with a button press, such as holding down a button or clicking in the thumbstick while tilting it to sprint in the corresponding direction, all to ensure a distinct separation between the different velocities.
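A minimal sketch of quantized velocity with the overlap described above (often called hysteresis), so that noisy input cannot oscillate between adjacent speeds; the band edges and speeds are illustrative assumptions to expose in your settings:

```cpp
struct QuantizedSpeed {
    // Entry/exit thresholds on stick magnitude [0, 1]; they differ so the
    // input must move well into a band before the speed changes.
    float walkEnter = 0.30f, walkExit = 0.20f;
    float runEnter  = 0.80f, runExit  = 0.70f;
    float walkSpeed = 1.4f;   // m/s, roughly a real walking pace
    float runSpeed  = 2.8f;   // roughly twice walking speed
    int   band = 0;           // 0 = stopped, 1 = walking, 2 = running

    float Select(float stickMagnitude) {
        switch (band) {
            case 0: if (stickMagnitude > walkEnter) band = 1; break;
            case 1: if (stickMagnitude > runEnter)  band = 2;
                    else if (stickMagnitude < walkExit) band = 0; break;
            case 2: if (stickMagnitude < runExit)   band = 1; break;
        }
        return band == 2 ? runSpeed : (band == 1 ? walkSpeed : 0.0f);
    }
};
```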
### Stepped Translations

It’s possible to eliminate acceleration entirely by performing movement as a series of rapid teleports, using the same rules for [thumbstick direction mapping](/resources/artificial-locomotion-controls/#direction-mapping) that are traditionally used to control the direction of movement. With this design, as long as the thumbstick is pushed, changes in position occur as a series of small teleports with a short time interval between them. Stepped translations are the translational equivalent of snap turns, and carry the same caveats and considerations with regard to usability and comfort.

It can be difficult to maintain precise control of movement if the teleports always move people a constant distance. More control is provided if the teleport distance is based on how far the thumbstick has been pushed. Timing is an important element of this technique: if the thumbstick is pushed all the way out, the movement can occur immediately; if the thumbstick is only pushed a moderate amount, the input logic will need to detect that the thumbstick has been stationary for a small duration before performing a shorter movement.

Like snap turns, stepped translations work by preventing your brain from perceiving continuous motion, so you should ensure that the user cannot make eight or more small teleports within one second. At that frequency and above, users will once again perceive motion, which can lead to vection and discomfort.

This technique can also be implemented as a simpler version of the [Projected Avatar technique](https://developer.oculus.com/resources/locomotion-design-turns-teleportation/#projected-avatars), where the avatar moves continuously from the perspective of other players and NPCs while the user moves with a series of small teleports, effectively catching up with the projected avatar. Depending on how frequently the teleports occur, it may be useful to disable the third-person view of the avatar for the controlling player as they navigate the small distance to the new location.

### Limited Axes of Movement

Many users report a particular sensitivity to motion and acceleration along specific axes, such as lateral movement (also known as strafing) and yaw rotation. Limiting the axes of movement driven by the thumbstick can prevent vection along these problematic dimensions. The most restrictive approach is to limit thumbstick input to forward and backward movement relative to the direction the user is facing, which requires people to physically rotate their head, and possibly their body, to move in another direction. If you use this control scheme, be sure to consider the range of motion your experience demands, which can create issues for accessibility and space requirements. Limiting the axes of movement to multiples of 15, 30, 45, 90, or 180 degrees can provide the flexibility necessary to suit individual preferences while making motion more regular and predictable (see the sketch below).

It is important to keep these restrictions in mind when designing your environment and interactions. If limiting the axes of movement forces the user to frequently take suboptimal paths through the environment, make additional corrective movements, or otherwise experience more vection and acceleration than they would with more precise movement controls, you might end up hurting the comfort and usability of your experience instead of improving them.
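A minimal sketch of constraining thumbstick movement to fixed angular increments: the input direction is snapped to the nearest allowed angle before it drives locomotion. The step size, deadzone, and function name are illustrative assumptions:

```cpp
#include <cmath>

struct Vec2 { float x = 0, y = 0; };

// Snaps the thumbstick direction to the nearest multiple of stepDeg
// (e.g. 45 degrees), preserving the stick's magnitude.
Vec2 SnapDirection(const Vec2& stick, float stepDeg = 45.0f) {
    float magnitude = std::sqrt(stick.x * stick.x + stick.y * stick.y);
    if (magnitude < 0.15f) return {0.0f, 0.0f};       // assumed deadzone
    float angle = std::atan2(stick.y, stick.x);       // radians
    float step  = stepDeg * 3.14159265f / 180.0f;
    float snapped = std::round(angle / step) * step;  // nearest allowed axis
    return { std::cos(snapped) * magnitude, std::sin(snapped) * magnitude };
}
```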
### Limit Camera Elevation Changes

Because the camera usually maintains a fixed distance above the ground, uneven slopes can trigger vertical acceleration just from walking across them. A similar effect occurs when leaning forward while the ground below has a slope, steps, or a ledge. Leaning forward in these scenarios can be very uncomfortable: our head physically moves slightly closer to the ground, but the virtual camera actually rises because the ground is slightly higher in the locations below the headset.

To reduce these side effects for both walking and leaning, modify the camera logic so the perspective remains at a constant elevation, even as the ground undulates below the user as they traverse the terrain, until a limit is reached and the elevation is updated to match the current position. This leads to occasional vertical repositioning of the camera. The repositioning can be performed with a teleport, which eliminates the vection but introduces a small amount of visual discontinuity. An alternative is to quickly warp the camera to the new elevation, similar to how artificial crouching should behave.

When camera movement is due to physical locomotion, it can be useful to delay the artificial vertical movement of the camera until artificial locomotion is triggered, or until larger and separately tunable distance limits have been met. This allows people to move in their playspace, lean over objects, and explore without triggering the vertical motion quite as often. [Cyan](https://www.oculus.com/lynx/?u=https%3A%2F%2Fcyan.com%2F&e=AT2x_N9oiyauOZBE-MYG_tc4oKPgeGZAQoBagZtFFBGCOUaDPx2pNvJ2HYmSgHSW6cIKi0dyZv2fXxJcefUztrnNmzbZFun-3R3MgdetgxgU0wPzHxxAERBrg5whvhlP8RrHllbcsN3hOQyRWvphZg), developers of [Myst](https://www.oculus.com/experiences/quest/2719294624823942/?locale=en_US), describe this technique in their [Dynamic Slope Quantization for VR Comfort](https://www.oculus.com/lynx/?u=https%3A%2F%2Fcyan.com%2Fdynamic-slope-quantization%2F&e=AT2x_N9oiyauOZBE-MYG_tc4oKPgeGZAQoBagZtFFBGCOUaDPx2pNvJ2HYmSgHSW6cIKi0dyZv2fXxJcefUztrnNmzbZFun-3R3MgdetgxgU0wPzHxxAERBrg5whvhlP8RrHllbcsN3hOQyRWvphZg) blog post.

It is useful to let your audience adjust the settings that control how much movement, both artificial and physical, can occur before the camera is artificially moved to the correct elevation for the current position. This allows people to disable the feature by setting these values to zero if they prefer, and/or to tune the maximum travel distances for each type of movement.

### Soft Camera Collisions

When your design requires the camera to collide with the environment, this can lead to confusing or disorienting experiences. For example, if a user attempts to physically move to a point inside a virtual wall, the collision will prevent the virtual camera from moving through the wall. The user then perceives the wall as moving away from the camera as they approach it, possibly leading to confusion or discomfort. This can be particularly jarring if the collision is unexpected, which can easily happen in many irregular environments.

This effect can be reduced by changing the collision response from a hard collision, an instant reaction that stops the camera from moving through objects, to a soft collision: a more gradual reaction that still prevents the camera from passing through objects, but instead of stopping instantly very close to the wall, slows the camera down over a short distance before it reaches the point where it cannot move any further. Although this introduces a sensory mismatch that would not occur with a hard collision, the effect is less jarring than an instantaneous stop. As a result, the user can more easily learn to detect and predict the onset of soft collisions and self-correct their movement to avoid them in the future. [Ubisoft’s Space Junkies](https://www.oculus.com/experiences/rift/1340764792673326/) demonstrates this technique very well.
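A minimal sketch of a soft camera collision: rather than stopping dead at the wall, the allowed step shrinks smoothly across a buffer zone in front of it. The zone sizes are illustrative, and `distanceToWall` is assumed to come from a physics query along the movement direction:

```cpp
#include <algorithm>

// Scales down the camera's desired step toward an obstacle as it enters the
// buffer zone, instead of stopping it instantly at the surface. Assumes the
// step is directed toward the obstacle.
float SoftCollisionStep(float desiredStep,     // meters the camera wants to move
                        float distanceToWall,  // meters to the nearest obstacle
                        float bufferZone = 0.5f,
                        float hardStop   = 0.05f) {
    if (distanceToWall <= hardStop)   return 0.0f;         // cannot advance further
    if (distanceToWall >= bufferZone) return desiredStep;  // free movement
    // Inside the buffer: scale the step down linearly toward zero.
    float scale = (distanceToWall - hardStop) / (bufferZone - hardStop);
    float step  = desiredStep * scale;
    // Never step past the hard-stop distance, even at high speeds.
    return std::min(step, distanceToWall - hardStop);
}
```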
Reduce Optic Flow
=================

Optic flow refers to the coherent motion of visual features, such as edges, textures, and colors in your environment, across your retina, which signals to your brain that you are moving through that environment. As an example, seeing a field of stars expanding radially outward from the center of your field of view can tell your brain that you are moving forward through space. The amount of vection you experience, which is a potential source of discomfort, correlates with the amount and speed of the optic flow you see in VR. The techniques in this section offer suggestions for reducing optic flow to improve user comfort.

### Use Realistic Walking and Running Speeds

It is common for console games to have avatars that walk and run at unrealistically high speeds, but in VR the resulting optic flow is more intense than what we are accustomed to experiencing in real life. The average human walks at about three miles per hour (1.4 meters per second) and runs at about twice that speed. Keep this in mind when designing your environments.

### Vignettes

Vignettes, sometimes referred to as tunnel vision, darken or completely occlude the edges of the screen when movement occurs. They serve to limit the amount of visible optic flow, which can help reduce vection during acceleration. The tradeoff with vignettes is that a diminished field of view can be disorienting or claustrophobic for the user, and simple implementations can feel mechanical or intrusive.

More advanced vignetting systems analyze the scene geometry to selectively occlude the sections of the visual scene that contain the most optic flow information. For example, if you’re moving forward with a textured wall to your left and open space to your right, the vignette would occlude the part of your visual field containing the wall, where there are strong optic flow cues, but leave more of the open space visible because it generates less motion on the retina. [Ubisoft’s Eagle Flight](https://www.oculus.com/experiences/rift/1003772722992854/) demonstrates this technique masterfully.

### Occlusion of Surroundings

Some designs may allow for geometry that occludes some of the optic flow in the environment or in regions of the user’s visual field. For example, vehicle cabins, cockpits, helmets, and other headgear can help the user maintain a feeling of immersion while relegating visible optic flow to windows or apertures. This provides the benefits of an aggressively tuned vignette effect, where much of the optic flow is obscured, while still enabling people to see and interact with the virtual environment created by the occluding geometry.

This technique affords creative opportunities for immersion and discomfort mitigation as well. For example, [DiRT Rally](https://www.oculus.com/experiences/rift/992124094188963) places a stabilized camera inside the driver’s seat of a race car. To indicate the car is going over bumpy ground, the car geometry jumps and shakes around the user, but the camera maintains a stable view of the outside environment that is consistent with how the user’s head is (not) moving. This creates less visual-vestibular mismatch than if the camera shook along with the car, while still conveying information to the user about their movement over the virtual terrain.

### Temporal Occlusion

The purpose of temporal occlusion is to briefly obscure part of the display so that there is a reduction in optic flow. The effect is visible for only a short amount of time, so it doesn’t interfere with our awareness of the environment. There are a number of ways to apply temporal occlusion, for example, as a dynamic pattern on the edge of the vignette, or as ribbons of solid color that appear across the entire display during rapid movements and quickly disappear. [Eagle Flight](https://www.oculus.com/experiences/rift/1003772722992854/) demonstrates both of these effects very well.
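Tying together the vignette ideas above, here is a minimal sketch of a speed-driven vignette whose aperture narrows as locomotion speed rises; the angles and easing rate are illustrative assumptions, and the returned aperture would feed a full-screen mask shader:

```cpp
#include <algorithm>

struct DynamicVignette {
    float openFovDeg    = 110.0f;  // aperture when stationary (no vignette)
    float closedFovDeg  = 60.0f;   // aperture at full locomotion speed
    float easeSpeed     = 4.0f;    // how quickly the aperture adapts (1/s)
    float currentFovDeg = 110.0f;

    // Call once per frame with the current artificial locomotion speed.
    float Update(float locomotionSpeed, float maxSpeed, float dt) {
        float t = std::clamp(locomotionSpeed / maxSpeed, 0.0f, 1.0f);
        float target = openFovDeg + (closedFovDeg - openFovDeg) * t;
        // Ease toward the target so the vignette itself never pops.
        currentFovDeg += (target - currentFovDeg) * std::min(easeSpeed * dt, 1.0f);
        return currentFovDeg;
    }
};
```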
### Peripheral Vision Occlusion

In some designs, it may be possible to have geometry that becomes opaque depending on how far it is from the center of the field of view at any given moment. This can significantly reduce optic flow when detailed environments are hidden behind relatively solid surfaces. It is particularly effective in vehicles, where most of the optic flow takes place in the windows that display the environment moving past the user. If the window you are currently looking at is always transparent, you can always see what you want to see, even while the rest of the vehicle’s windows are opaque. This provides the benefits of an aggressively tuned vignette effect, where much of the optic flow is eliminated, while still allowing people to see and interact with the interior of the vehicle. [Ultrawings](https://www.oculus.com/experiences/quest/1798409083604479) demonstrates this technique effectively.

### Reduced Texture Detail

The more detail that is visible on the surfaces of objects, the more optic flow occurs when a user moves past them. Reduce optic flow with art styles that make heavy use of solid textures, and minimize the number of visible edges and noisy textures. Advanced techniques include leveraging pixel shaders to reduce texture detail on surfaces in relation to their speed of movement as projected on the screen.

Environmental Considerations
============================

The environment can present any number of situations that lead to moments of acceleration and vection. This guide discusses a number of considerations you may want to take into account as you plan your environment, so you can reduce vection as much as possible.

### Minimize Movement on Slopes

When walking across virtual terrain with either physical or artificial locomotion, the avatar will typically move up and down to follow the surface due to gravity. This can lead to vertical accelerations and vection as the angle of the walking surface varies. When possible, it is generally advisable to limit movement to flat terrain and avoid slopes and stairs in the accessible environment.

### Design for Forward Movement

People are most comfortable when they are moving forward. With forward motion, parallax causes points nearest the direction of movement to expand outward more slowly across the retina than points in the periphery. Strafing, back-stepping, spinning, and continuous turning movements will often trigger discomfort; spiral staircases are a good example of something to avoid. When the design allows, plan the environment to minimize the need for extraneous movement.

### Keep Walls at a Distance

A textured surface like a wall or ground plane creates more optic flow the closer the user is to it as they move past. When possible, keep the user as far as possible from large structures and walls without compromising the design. Open spaces, or large rooms with barriers that prevent getting close to the walls, can dramatically reduce optic flow as people move through your environment.

### Elevator and Stair Design

Elevators and stairs have the potential to fill the field of view with strong visual motion cues, such as the vertical cascade of horizontal edges created by the steps of a staircase. With elevators, stairs, and other situations where the environment moves perpendicular to the camera direction, points across the entire field of view all move at roughly the same velocity.
This creates an even stronger cue for vection, and is therefore particularly likely to cause discomfort. The same is true for continuous rotation of the camera viewpoint. If elevators are necessary, avoid visual design elements that pass the user with each floor, such as lights and other details. For stairs, use gentle slopes to maximize the distance between the stairs’ geometry or texture and the camera. Minimize the number of steps to reduce how long people need to be on them, and reduce texture detail as much as possible to limit optic flow.

### Stair & Slope Teleports

Even if most of the environment supports free movement through the world, it can be helpful to provide teleport nodes at the top and bottom of stairways. This gives people who usually prefer to navigate with thumbsticks or other forms of continuous movement the option to avoid the increased comfort risk of the occasional set of stairs. If the design already uses short-range teleports to deal with ladders and gaps, adding support for slopes and stairs can be an extension of the same mechanic.

Sensory Reinforcement
=====================

Discomfort is often triggered by inconsistencies between sensory inputs such as vision, proprioception, and the vestibular sense. The techniques outlined in this guide are helpful for reducing inconsistencies and mismatches between the senses, and for making the user experience as comfortable as possible.

### Consistent Frame Rate and Head Tracking

A consistent frame rate makes it possible for the camera perspective to reliably match the player’s physical pose. Seeing the view update reliably and with low latency when looking in any given direction is absolutely necessary for an application to be comfortable. If the frame rate is not consistent, the player will experience judder: the virtual camera position no longer matches the physical camera position. In practice, this means a previously rendered frame that was valid at some time in the past remains visible to the player despite the continued physical movement of the HMD. Judder is uncomfortable to experience and should be prevented as much as possible.

Because maintaining a solid frame rate is such an important factor in maximizing comfort in VR, the Oculus system services provide a reprojection feature called [Asynchronous Time Warp](https://developer.oculus.com/documentation/native/android/mobile-timewarp-overview/) (ATW), which reduces the effect of judder when the application doesn’t submit frames fast enough to keep up with the display refresh rate. This technique takes the previously submitted frame and redraws it for the current frame so that it tracks the current head position. Reprojection is not nearly as effective as maintaining a consistent frame rate in the first place, so it is important to keep optimizing until your app runs at frame rate.

### Independent Visual Backgrounds

An independent visual background (IVB) can reduce discomfort by helping the brain reinterpret the visual information it receives from VR, and in some cases by reducing the amount of visible optic flow. Usually, when your brain sees as much coherent, correlated motion in your visual field as it does during virtual locomotion, it is because you are the one moving through a stable world. We rarely, if ever, face situations in the real world where we are stationary while large portions of our surroundings move around us.
However, if you can give your brain sufficient visual evidence that this is the case, you will no longer perceive yourself as moving. Consequently, your vestibular and visual senses will better agree, which in turn will reduce discomfort. As one might expect, it is no straightforward task to convince your brain that the visual motion of virtual locomotion is the result of the world moving around you rather than you moving through the world.

An independent visual background puts geometry or imagery in your environment that is consistent with what your vestibular organs are sensing. A simple example would be a visually rich skybox that, instead of responding to thumbstick input, only responds to head movement. Say you put your users facing north in a grassy field with a cloud-filled sky an infinite distance away. If they use virtual locomotion to walk around the field, no matter what thumbstick input they provide, they will still be looking at the northern sky. However, if they physically turn their head to the left or right, they will see the western or eastern sky, respectively.

In essence, the IVB is like putting the user in a giant stable room. They can physically look around and move through the IVB by turning their heads and moving their bodies. When they see visual motion as a result of virtual locomotion, the user’s brain can then plausibly interpret that motion as occurring around them while they stay still in their stable IVB.

In the past several years of VR, there have been a few notable examples of IVBs. In the game [Vanguard V](https://www.oculus.com/experiences/rift/1530332686985316/), the user is always flying forward through space. The skybox shows an outer-space environment of star fields and planets that are far enough away that you would not see any motion parallax from the movements you and your avatar make. Any perception of movement through space is communicated through the way relatively nearby objects pass by, and through visual cues like motion lines.

[Brass Tactics](https://www.oculus.com/experiences/rift/1101975213197949/) also uses an IVB to help make moving through a virtual miniature battlefield more comfortable. The game takes place in a castle war hall where battles unfold on a giant table in front of the player. Although the player can grab the tabletop and move it around to traverse the battlefield, the castle environment behaves like a normal room. This helps your brain form an interpretation of the visual image that is consistent with your vestibular sense: that you, the player, are sitting inside a room, and the movement of the battlefield in front of you is the tabletop being moved around inside this war hall.
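A minimal sketch of the stationary-skybox IVB described above: the main environment is rendered with both head-tracked and artificial yaw applied, while the skybox ignores artificial yaw, so thumbstick turning never moves the background. The struct and field names are illustrative assumptions, and rendering details are omitted:

```cpp
struct LocomotionRig {
    float physicalYawDeg   = 0.0f;  // from head tracking; always applied
    float artificialYawDeg = 0.0f;  // from smooth/snap turns; world only

    // Yaw used when rendering the main environment.
    float WorldViewYaw() const { return physicalYawDeg + artificialYawDeg; }

    // Yaw used when rendering the skybox IVB: head movement only, so the
    // background behaves like a stable room around the player.
    float SkyboxViewYaw() const { return physicalYawDeg; }
};
```

Because a skybox is rendered at effectively infinite distance, it already ignores translation; artificial yaw is the main locomotion input that has to be filtered out.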
Independent visual backgrounds can be effective, but their unique behavior does not lend itself to easy implementation in just any VR app. In the previous examples, the user is in a very open environment that can remain stable and has an in-universe explanation for its behavior. An IVB may be impossible to show the player in a small corridor, and without a story or explanation for why the skybox IVB is not responding to thumbstick input, the experience might seem confusing or even broken. Still, some developers are exploring creative ways to build a more general-use IVB so it might benefit a wider variety of VR experiences. For example, [Google Earth VR](https://www.oculus.com/experiences/rift/1513995308673845/) combines the IVB with a vignette/occlusion effect. Among its comfort options is a setting where, any time the user moves around the virtual Earth, the periphery of their field of view is replaced, instead of simply being darkened or obscured, with a view of a simple, static environment consisting of a grid-lined floor plane and a skybox. The net effect is that the user’s brain can form the perception that they are standing in the stable VR environment, and the motion of the virtual Earth is simply occurring inside a portal or window in front of them. Digital Lode’s [Espire](https://www.oculus.com/experiences/quest/2228678273856228/) does something similar whenever the player uses artificial locomotion, fading in a translucent 3D grid surrounding the center of the field of view.

IVBs are easily confused with a simple cockpit, heads-up display (HUD), or other form of vignetting or occlusion. While IVBs can similarly reduce the amount of visible optic flow on the screen, there are some key differences. As a general rule, an IVB must afford the user the interpretation that the on-screen motion comes from the world or scenery moving around them rather than from them moving through the world. A cockpit or HUD does little to foster this percept: the user still perceives themselves as moving, just while in a vehicle or while wearing a virtual HMD.

We do not yet know all the factors that contribute to this interpretation, but the past several years of design experimentation have revealed some important observations. First, as one might expect, the amount of the visual field occupied by optic flow versus the IVB matters; users have to see the IVB for their brains to use it. At some point (which will vary by context and by user), the visual information signaling an IVB becomes insufficient, and vection can once again prevail as the conscious percept. The perceived depth relationship between the IVB and the main foreground content can play a role as well, as defined by binocular disparity and occlusion depth cues. An IVB that is farther away from the user than the part of the environment signaling optic flow better lends itself to the interpretation that the user is stationary in a stable room or environment in which objects are moving around them. If depth cues make the user perceive the IVB as sitting between themselves and the optic flow, it can be interpreted as a HUD or cockpit, which is less effective. There is still a lot to learn about what makes an effective versus an ineffective IVB, so we encourage design experimentation and user testing to see if an IVB can fit into and improve the comfort of your VR experience.

### Simulated Activities

Many developers and users find comfort benefits in controlling artificial locomotion through the re-enactment of physical activities, such as walking in place or pulling on the rungs of a ladder to climb (as opposed to using thumbstick or button inputs to accomplish the same thing). Of course, forcing the user to engage in approximations of physical activities creates a risk of fatigue and accessibility issues if users have no alternative movement scheme, so consider your options wisely.

There are many possible reasons why physically simulating these activities might improve comfort. For simulated walking, it has been theorized that the proprioceptive and vestibular input might better align with the visual motion, reducing sensory conflict.
Another possibility is that these sensory inputs introduce noise into your perceptual system, making it ambiguous to your brain whether you are actually walking through space or not. Mechanical devices that vibrate against the head to stimulate the vestibular organs operate on a similar principle. The effectiveness of such methods varies from person to person, depending on how each individual interprets and perceives all of the sensory input.

In the case of climbing a ladder or world pulling, the user has more granular and predictable control of environmental movement than a simple thumbstick press provides. The act of grabbing and moving the environment with one’s hands and arms creates a strong sensorimotor feedback loop, where the coherent motion the user sees is causally linked to the hand movements in a more tangible way than simply tilting a thumbstick. It can also create an alternative interpretation, that your actions are moving the environment’s geometry while you stay still, similar to an independent visual background. Games such as Crytek’s [The Climb](https://www.oculus.com/experiences/rift/866068943510454/) and Ready At Dawn’s [Lone Echo](https://www.oculus.com/experiences/rift/1368187813209608/) use the world-pulling technique to great effect.

### Spatial Sound Effects

Environmental sound effects can be helpful for reducing disorientation when implementing a blink effect, or any other effect that occludes the environment. Imagine teleporting toward a loud vehicle that is on the far side of a ringing alarm. If the listener’s position changes along with the blink effect, they will hear the alarm pass by, along with the sound of the approaching vehicle, during the short time the screen is dark. This helps users orient themselves in the environment. Check out the [VR Audio Best Practice Guide](https://developer.oculus.com/resources/bp-audio/) for more on spatial sound design for your VR app.
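A minimal sketch of this idea: the audio listener travels through space during the blink, so spatialized sounds sweep past the user while the screen is dark. The vector type and function are illustrative assumptions; any audio API with a settable listener position would work:

```cpp
struct Vec3 { float x = 0, y = 0, z = 0; };

Vec3 Lerp(const Vec3& a, const Vec3& b, float t) {
    return { a.x + (b.x - a.x) * t,
             a.y + (b.y - a.y) * t,
             a.z + (b.z - a.z) * t };
}

// Called every frame while the blink transition plays; `t` is the transition
// progress in [0, 1] from the origin to the teleport destination. Feed the
// result to your audio engine's listener position each frame.
Vec3 ListenerPositionDuringBlink(const Vec3& from, const Vec3& to, float t) {
    return Lerp(from, to, t);  // the listener travels even though vision is dark
}
```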