# Oculus Best Practices

[Oculus Best Practices PDF](https://scontent.oculuscdn.com/v/t64.5771-25/12482206_237917063479780_486464407014998016_n.pdf?_nc_cat=105&ccb=1-5&_nc_sid=489e6e&_nc_ohc=eOjO2QwVUDoAX9a8jcG&_nc_ht=scontent.oculuscdn.com&oh=ac8f09ebaacac9afa1f61da75bc42b9c&oe=61B25752)

Page 1 | [Page 2](https://hackmd.io/@arpad-gabor/vr-page-2) | [Page 3](https://hackmd.io/@arpad-gabor/vr-page-3)

Vision
======

This section offers tips about how to display the virtual world you’re creating to your users, and is provided with a bit more explanation due to its complexity.

Using Monocular Depth Cues
--------------------------

Failing to properly represent the depth of objects will break a VR experience. Stereopsis, the perception of depth based on the disparity between the viewpoints of each eye, is the most salient depth cue, but it is only one of many ways the brain processes depth information. Many visual depth cues are monocular; that is, they convey depth even when they are viewed by only one eye or appear in a flat image viewed by both eyes. One such depth cue is motion parallax, or the degree to which objects at different distances appear to move at different rates during head movement.

Other depth cues include: curvilinear perspective (straight lines converge as they extend into the distance), relative scale (objects get smaller when they are farther away), occlusion (closer objects block our view of more distant objects), aerial perspective (distant objects appear fainter than close objects because of light scattering in the atmosphere), texture gradients (repeating patterns get more densely packed as they recede into the distance), and lighting (highlights and shadows help us perceive the shape and position of objects).

Current-generation computer-generated content (such as content created in Unreal and Unity) leverages many of these depth cues, but we mention them because it can be easy to neglect their importance. If they are implemented improperly, the experience may become uncomfortable or difficult to view as a result of conflicting depth signals.

Comfortable Viewing Distances
-----------------------------

Two issues are of primary importance to understanding eye comfort when the eyes are focusing on an object in VR: accommodative demand and vergence demand. Accommodative demand refers to how your eyes have to adjust the shape of their lenses to bring a depth plane into focus (a process known as accommodation). Vergence demand refers to the degree to which the eyes have to rotate inwards so their lines of sight intersect at a particular depth plane. In the real world, the two are strongly correlated; so much so that we have what is known as the accommodation-convergence reflex: the degree of convergence of your eyes influences the accommodation of your lenses, and vice-versa.

VR creates an unusual situation that decouples accommodative and vergence demands: accommodative demand is fixed, but vergence demand can change. This is because the images used to create stereoscopic 3D are always presented on a screen that remains at the same optical distance, while the different images presented to each eye still require the eyes to rotate so their lines of sight converge on objects at a variety of depth planes. How far the accommodative and vergence demands can diverge before the experience becomes uncomfortable varies from person to person.

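
In practice, this means placing UI that users will fixate on at a comfortable, fixed depth in the scene rather than close to the camera. Below is a minimal Unity C# sketch of one way to do this when a menu opens; the script and field names are illustrative, and the default distance follows the guidance in the next paragraph.

```csharp
using UnityEngine;

// Places a world-space menu at a comfortable depth in front of the user when
// it opens, rather than head-locking it every frame. Attach to the menu root.
public class ComfortableMenuPlacement : MonoBehaviour
{
    public float distance = 1.0f;   // at least 0.5 m; ~1 m is a common choice

    void OnEnable()
    {
        Transform head = Camera.main.transform;   // center-eye camera in VR

        // Project the gaze direction onto the horizontal plane so the menu
        // stays upright at eye height.
        Vector3 forward = head.forward;
        forward.y = 0f;
        if (forward.sqrMagnitude < 0.001f)
            forward = Vector3.forward;            // user is looking straight up/down
        forward.Normalize();

        transform.position = head.position + forward * Mathf.Max(distance, 0.5f);
        transform.rotation = Quaternion.LookRotation(forward);
    }
}
```
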
In order to prevent eyestrain, objects that you know the user will be fixating on for an extended period of time (e.g., a menu or an object of interest in the environment) should be rendered at least 0.5 meters away. Many developers have found that 1 meter is a comfortable distance for menus and GUIs that users may focus on for extended periods. Obviously, a complete virtual environment requires rendering some objects outside this comfortable range. As long as users are not required to focus on those objects for extended periods, they should not cause discomfort for most individuals.

Some developers have found that depth-of-field effects can be both immersive and comfortable in situations where you know where the user is looking. For example, you might artificially blur the background behind a menu the user brings up, or blur objects that fall outside the depth plane of an object being held up for examination. This not only simulates the natural functioning of vision in the real world, it also prevents salient objects outside the user’s focus from distracting the eyes.

You have no control over a user who chooses to behave in an unreasonable manner. A user may choose to stand with their eyes inches away from an object and stare at it all day. Your responsibility is to avoid requiring scenarios that may cause discomfort.

Viewing Objects at a Distance
-----------------------------

Beyond a certain distance, depth perception becomes less sensitive. Up close, stereopsis might allow you to tell which of two objects on your desk is closer on the scale of millimeters. This becomes more difficult farther out. If you look at two trees on the opposite side of a park, they might have to be meters apart before you can confidently tell which is closer or farther away. At even larger scales, you might have trouble telling which of two mountains in a mountain range is closer to you until the difference reaches kilometers.

Use this relative insensitivity to depth in the distance to free up computational power by using impostor or billboard textures in place of fully 3D scenery. For instance, rather than rendering a distant hill in 3D, you might simply render a flat image of the hill onto a single polygon that appears in the left and right eye images. This image will look the same to the eyes in VR as it would in a traditional 3D game.

Note: The effectiveness of these impostors will vary depending on the size of the objects involved, the depth cues inside of and around those objects, and the context in which they appear. You will need to test your own assets to ensure the impostors look and feel right. Be sure that the impostors are sufficiently distant from the camera to blend in inconspicuously, and that interfaces between real and impostor scene elements do not break immersion.

Rendering Stereoscopic Images
-----------------------------

We often face situations in the real world where each eye gets a very different viewpoint, and we generally have little problem with it. Peeking around a corner with one eye works in VR just as well as it does in real life. In fact, the eyes’ different viewpoints can be beneficial: say you’re a special agent (in real life or VR) trying to stay hidden in some tall grass. Your eyes’ different viewpoints allow you to look “through” the grass to monitor your surroundings as if the grass weren’t even there in front of you. Doing the same in a video game on a 2D screen may leave the world completely occluded behind each blade of grass.

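
As an aside on the impostor technique above, the sketch below shows a minimal Unity C# camera-facing billboard for distant scenery. The yaw-only rotation (to keep the horizon level) and the facing convention are assumptions; adjust them to your assets.

```csharp
using UnityEngine;

// Keeps a distant impostor quad turned toward the camera. Because the quad is
// ordinary world geometry, both eyes see it with the correct disparity for
// its distance.
public class DistantImpostor : MonoBehaviour
{
    Transform cam;

    void Start()
    {
        cam = Camera.main.transform;
    }

    void LateUpdate()
    {
        // Rotate around the vertical axis only, so the horizon stays level.
        Vector3 toCamera = cam.position - transform.position;
        toCamera.y = 0f;
        if (toCamera.sqrMagnitude < 0.001f)
            return;

        // Flip the sign if your impostor mesh faces the other way.
        transform.rotation = Quaternion.LookRotation(-toCamera);
    }
}
```
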
Binocular viewing is mostly forgiving, but VR (like any other stereoscopic imagery) can still give rise to situations that are uncomfortable for the user. For instance, rendering effects (such as light distortion, particle effects, or light bloom) should always appear in both eyes and with correct disparity. Improperly rendered effects appear to flicker or shimmer (when something appears only in one eye) or to float at the wrong depth (if disparity is off, or if a post-processing effect, such as a specular shading pass, is not rendered at the depth of the object it applies to). It is important to ensure that the images presented to the two eyes do not differ aside from the slightly different viewing positions inherent to binocular disparity.

It’s typically not a problem in a complex 3D environment, but be sure to give the user’s brain enough information to fuse the stereoscopic images together. The lines and edges that make up a 3D scene are generally sufficient. However, be very cautious about using wide swaths of repeating patterns or textures, which could cause people to fuse the images differently than intended. Be aware also that optical illusions of depth (such as the [hollow mask illusion](https://www.oculus.com/lynx/?u=https%3A%2F%2Fen.wikipedia.org%2Fwiki%2FHollow-Face_illusion&e=AT2x_N9oiyauOZBE-MYG_tc4oKPgeGZAQoBagZtFFBGCOUaDPx2pNvJ2HYmSgHSW6cIKi0dyZv2fXxJcefUztrnNmzbZFun-3R3MgdetgxgU0wPzHxxAERBrg5whvhlP8RrHllbcsN3hOQyRWvphZg), where concave surfaces appear convex) can sometimes lead to misperceptions, particularly in situations where monocular depth cues are sparse.

Displaying Information in VR
----------------------------

We discourage the use of traditional HUDs to display information in VR. Instead, embed the information into the environment or the user’s avatar. Although certain traditional conventions can work with thoughtful re-design, simply porting a HUD from a non-VR game into VR content introduces new issues that make it impractical or even discomforting. Should you choose to incorporate some HUD elements, be aware of the following issues.

1. Don’t occlude the scene with the HUD. This isn’t a problem in non-stereoscopic games, because the user can easily assume that the HUD is in front of everything else. Adding binocular disparity (the slight differences between the images projected to each eye) as a depth cue can create a contradiction if a scene element comes closer to the user than the depth plane of the HUD. Based on occlusion, the HUD is perceived to be closer than the scene element because it covers everything behind it, yet binocular disparity indicates that the HUD is farther away than the scene element it occludes. This can lead to difficulty and/or discomfort when trying to fuse the images for either the HUD or the environment.

2. Don’t draw overlay elements on top of scene objects that should occlude them. This problem is extremely common with reticles, subtitles, and other floating UI elements: an object that should be “behind” a wall (in terms of distance from the camera) gets drawn “in front” of the wall because it has been implemented as an overlay. This sends conflicting cues about the depth of these objects, which can be uncomfortable.

![](https://lh5.googleusercontent.com/v3ZstlhGSkwGVEoVvih6LZ1nhFwYZCjMZIZjHWgvlhDuYjhvK0iU0c2Tvc36GRbM5_6YWCaLWWjwJLWNVAa3mhX1EQsBjPMfVe0r8mpcQl5v-893ZHfgOnDo-yu80iJkU7ychjtI)

Instead, we recommend that you build the information into the environment.

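
One straightforward way to do this in Unity is to put the information on a world-space canvas parented to something in the scene, such as a wrist or a cockpit dashboard, so it has real depth and is occluded like any other object. A minimal sketch follows; the anchor transform, offset, and scale values are illustrative assumptions.

```csharp
using UnityEngine;

// Turns a HUD-style canvas into a world-space panel attached to an anchor in
// the scene (e.g. the avatar's wrist or a cockpit dashboard), so it has real
// depth and participates in occlusion like any other scene object.
public class WorldSpaceInfoPanel : MonoBehaviour
{
    public Canvas canvas;                       // the UI to embed in the world
    public Transform anchor;                    // e.g. wrist or dashboard transform
    public Vector3 localOffset = new Vector3(0f, 0.05f, 0f);

    void Start()
    {
        canvas.renderMode = RenderMode.WorldSpace;

        // World-space canvases are measured in meters, so scale the UI down.
        canvas.transform.localScale = Vector3.one * 0.001f;

        canvas.transform.SetParent(anchor, worldPositionStays: false);
        canvas.transform.localPosition = localOffset;
        canvas.transform.localRotation = Quaternion.identity;
    }
}
```
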
Users can move their heads to retrieve information in an intuitive way. For instance, rather than including a minimap and compass in a HUD, the player might get their bearings by glancing down at an actual map and compass in their avatar’s hands or cockpit, or at a watch that displays the player’s vital information. This is not to say realism is necessary; enemy health gauges might still float over their heads. What’s important is presenting information in a clear and comfortable way that does not interfere with the player’s ability to perceive a clear, single image of the environment or of the information they are trying to gather.

Targeting reticles are common elements in games and a good example of adapting an old information paradigm to VR. While a reticle is critical for accurate aiming, simply pasting it over the scene at a fixed depth plane will not yield the behavior players expect. For example, if the reticle is rendered at a depth different from where the eyes are converged, it is perceived as a double image. For the targeting reticle to work the way it does in traditional video games, it must be drawn at the depth of the object it is targeting, presumably where the user’s eyes are converged when aiming. The reticle itself can have a fixed world-space size that appears bigger or smaller with distance, or you can program it to maintain a constant apparent size to the user; this is largely an aesthetic decision.

Place critical gameplay elements in the user’s immediate line of sight. UI or other elements displayed outside the user’s immediate field of view are more likely to be missed.

![](https://lh4.googleusercontent.com/7zGEoYhwzdhAax5YPjK8cHlZHp8x_q-Qan3c_7l4elDpjp3hzUGgBYTahDPnNQD-DC8aaCUJgcGxTYJpOqPn_nrH-xX3EU2CcZ33bT9EV1sTBCysdvdFIvyrt5aSTzKgwvmGJOOL)

Image source: http://buildmedia.com/portfolio-items/what-are-survey-accurate-visual-simulations/

Camera Origin & User Perspective
--------------------------------

You should consider the height of the user’s point of view (POV), as it can be a factor in causing discomfort. The lower the user’s POV, the more rapidly the ground plane changes, creating a more intense display of optic flow. This can create an uncomfortable sensation for the same reason that moving up staircases is uncomfortable: doing so creates intense optic flow across the visual field.

When developing a VR app, you can choose to place the camera’s origin at the user’s floor or at their eyes (these are called “floor” and “eye” origins, respectively). Both options have advantages and disadvantages. Using the floor as the origin causes people’s viewpoint to be at the same height off the ground that it is in real life. Aligning their virtual viewpoint height with their real-world height can increase the sense of immersion. However, you can’t control how tall people are in your virtual world; if you want to render a virtual body, you’ll need a solution that can scale to different people’s heights.

Using the user’s eyes as the camera’s origin means that you can control their height within the virtual world. This is useful for rendering virtual bodies of a specific height, and for offering perspectives that differ from people’s real-world experience (for example, you can show people what the world looks like from the eyes of a child). However, by using the eye point as the origin, you no longer know where the physical floor is.

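
If you are building with the Oculus Integration for Unity, this choice maps onto the tracking origin setting. A minimal sketch follows, assuming an OVRManager is present in the scene (the toggle field is illustrative).

```csharp
using UnityEngine;

// Selects a floor-level or eye-level tracking origin at startup.
// Assumes the Oculus Integration's OVRManager is present in the scene.
public class TrackingOriginSetup : MonoBehaviour
{
    public bool useFloorOrigin = true;          // illustrative toggle

    void Start()
    {
        OVRManager.instance.trackingOriginType = useFloorOrigin
            ? OVRManager.TrackingOrigin.FloorLevel  // viewpoint height matches the user's real height
            : OVRManager.TrackingOrigin.EyeLevel;   // you control the virtual eye height yourself
    }
}
```
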
Not knowing where the floor is also complicates interactions that involve ducking low or picking things up from the ground. Since you won’t actually know the user’s height, you may wish to add a recentering step at the beginning of your app to accurately record the user’s real-world height.

User Orientation and Positional Tracking
========================================

This section offers best practices about how to track and translate a user’s real-world movements to the virtual world. User orientation and positional tracking only apply to VR devices with 6DOF tracking capabilities, like the Rift.

Do not disable or modify position tracking. This is especially important while the user is moving in the real world. Any discernible mismatch between movement in the real world and movement in VR causes a conflict of the senses and is extremely discomforting.

Allow users to set their origin point. Users may prefer to orient themselves a certain way in the real world, often because of how their room is set up. Add guidance to your app that helps them position themselves in their preferred orientation. Users may also shift or move during gameplay, and therefore should be able to reset the origin at any time.

While roomscale holds a great deal of potential, it also introduces new challenges. First, users can leave the viewing area of the tracking camera and lose position tracking, which can be a very jarring experience. To maintain an uninterrupted experience, the Oculus Guardian system warns users as they approach the edges of the camera’s tracking volume, before position tracking is lost. Users should also receive some form of feedback that helps them better position themselves in front of the camera for tracking. For example, you can display the user’s origin point in the scene to help users position themselves. You could also query and display an outline of the user’s boundary.

Proper positional tracking requires people to define a play area in their homes for VR. This area is created during first-time setup for each user and is used with the Guardian system to help protect users. The amount of tracked space available varies from user to user, making it difficult to know how large to make your virtual spaces. Most people have about four square meters of trackable space available. If that were a square, it would be about two meters (roughly six and a half feet) per side, but many users will not have a perfectly square area. Some people have more space available, but a large number of people have less. Design your content and interactions with these tracked-space constraints in mind. You should not require interactions that occur outside a user’s Guardian configuration or defined play area.

Both roomscale and smaller tracked-space experiences may benefit from querying the player’s play area and rendering the important, interactive scene objects within that area. This helps the player stay within the tracked volume and ensures that everything interactive is within the player’s reach.

A challenge unique to VR is that users can move the virtual camera into positions that would previously have been impossible. For instance, users can move the camera to look under objects or around barriers to see parts of the environment that would be hidden in a conventional video game.

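
As suggested above, the configured play area can be queried at runtime through the Oculus Integration's boundary API and used to sanity-check where interactive content is placed. A minimal Unity sketch follows; it assumes the checked object's position is expressed in tracking-space coordinates, and the names are illustrative.

```csharp
using UnityEngine;

// Reads the user's configured play area and warns if an interactive object
// falls outside it. Assumes the Oculus Integration (OVRManager/OVRBoundary)
// and that pointOfInterest is positioned relative to the tracking space.
public class PlayAreaCheck : MonoBehaviour
{
    public Transform pointOfInterest;   // illustrative: an interactive object to validate

    void Start()
    {
        OVRBoundary boundary = OVRManager.boundary;
        if (boundary == null || !boundary.GetConfigured())
            return;

        // Width (x) and depth (z) of the rectangular play area, in meters.
        Vector3 size = boundary.GetDimensions(OVRBoundary.BoundaryType.PlayArea);
        Debug.LogFormat("Play area: {0:F1} m x {1:F1} m", size.x, size.z);

        bool inside = Mathf.Abs(pointOfInterest.localPosition.x) <= size.x * 0.5f
                   && Mathf.Abs(pointOfInterest.localPosition.z) <= size.z * 0.5f;
        if (!inside)
            Debug.LogWarning("Interactive object lies outside the user's play area.");
    }
}
```
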
While these new viewpoints open up new methods of interaction, like physically moving to peer around cover or to examine objects in the environment, they also allow users to discover technical shortcuts you might have taken in designing the environment that would normally stay hidden. Make sure that your art and assets do not break the user’s sense of immersion in the virtual environment.

Head-object intersections are another issue unique to positionally tracked VR. Users could potentially use position tracking to clip through the virtual environment by leaning through a wall or object. Ensure that your design does not allow users to get close enough to “solid” objects to intersect with them, which is a discomforting experience.

Rendering
=========

The rendering section introduces some of the things you should optimize for or avoid when rendering your scene.

Use text in UI and scene elements that can be easily read. There are several ways to ensure text legibility in VR. For rendering purposes, we recommend using a signed distance field font in your app. This ensures smooth rendering of the font even when it is zoomed or shrunk. You should also consider the languages your app supports; the complexity of letter combinations may influence legibility. For instance, your app may want to use a font that supports East Asian languages well. Localization may also affect text layout, as some languages employ more letters than others for the same copy.

Font size and placement in your scene are important as well. For Gear VR, choosing a font size larger than 30 pt will generally give you minimal legibility at the fixed z-depth of 4.5 m (in Unity); larger than 48 pt will generally ensure a comfortable reading experience. For Rift, a font size larger than 25 pt gives you minimal legibility at the fixed z-depth of 4.5 m (in Unity); larger than 42 pt generally ensures a comfortable reading experience.

Flicker plays a significant role in the oculomotor component of simulator sickness, and is generally perceived as a rapid “pulsing” of lightness and darkness on part or all of a screen. The degree to which a user will perceive flicker is a function of several factors, including the rate at which the display cycles between “on” and “off” modes, the amount of light emitted during the “on” phase, how much of the retina (and which parts) is being stimulated, and even the time of day and fatigue level of the individual.

Although flicker can become less consciously noticeable over time, it can still lead to headaches and eyestrain. Some people are extremely sensitive to flicker and experience eyestrain, fatigue, or headaches as a result; others will never even notice it or have any adverse symptoms. Still, there are certain factors that can increase or decrease the likelihood that any given person will perceive display flicker.

- First, people are more sensitive to flicker in the periphery than in the center of vision.
- Second, brighter screen images produce more flicker. Bright imagery, particularly in the periphery (e.g., standing in a bright, white room), can potentially create noticeable display flicker. Try to use darker colors whenever possible, particularly for areas outside the center of the player’s viewpoint.

In general, the higher the refresh rate, the less perceptible flicker is.

Do not create purposely flickering content. High-contrast, flashing (or rapidly alternating) stimuli can trigger photosensitive seizures in some people.
Related to this point, high-spatial-frequency textures (such as fine black-and-white stripes) can also trigger photosensitive seizures. The International Organization for Standardization has published [ISO 9241-391:2016](https://www.oculus.com/lynx/?u=https%3A%2F%2Fwww.iso.org%2Fstandard%2F56350.html&e=AT2x_N9oiyauOZBE-MYG_tc4oKPgeGZAQoBagZtFFBGCOUaDPx2pNvJ2HYmSgHSW6cIKi0dyZv2fXxJcefUztrnNmzbZFun-3R3MgdetgxgU0wPzHxxAERBrg5whvhlP8RrHllbcsN3hOQyRWvphZg) as a standard for image content that reduces the risk of photosensitive seizures. The standard addresses potentially harmful flashes and patterns. You must ensure that your content conforms to standards and best practices on image safety.

Use parallax mapping instead of normal mapping. Normal mapping provides realistic lighting cues to convey depth and texture without adding to the vertex detail of a given 3D model. Although widely used in modern games, it is much less compelling when viewed in stereoscopic 3D. Because normal mapping does not account for binocular disparity or motion parallax, it produces an image akin to a flat texture painted onto the object model. Parallax mapping builds on the idea of normal mapping, but accounts for depth cues normal mapping does not: it shifts the texture coordinates of the sampled surface texture using an additional height map provided by the content creator. The texture coordinate shift is applied using the per-pixel or per-vertex view direction calculated at the shader level. Parallax mapping is best used on surfaces with fine detail that would not affect the collision surface, such as brick walls or cobblestone pathways.

Apply the appropriate distortion correction for the platform you’re developing for. Lenses in VR headsets distort the rendered image, and this distortion is corrected by post-processing steps in the SDKs. It is extremely important that this correction be done correctly and according to the SDK guidelines. Incorrect distortion can “look” fairly correct but still feel disorienting and uncomfortable, so attention to the details is critical. All of the distortion correction values need to match the physical device, and none of them should be user-adjustable.

Latency and Lag
---------------

We’ll spend some time discussing the effects latency and lag have on users in VR. We don’t have specific recommendations for fixing these issues, as they can have numerous causes. Please review the Mobile [Testing and Troubleshooting](https://developer.oculus.com/documentation/mobilesdk/latest/concepts/book-testing/) and the Rift [Optimizing Your Application](https://developer.oculus.com/documentation/pcsdk/latest/concepts/dg-performance/) guides for information about optimizing your game loop.

Although developers have no control over many aspects of system latency (such as display update rate and hardware latencies), it is important to make sure your VR experience does not lag or drop frames. Many games slow down when numerous or more complex elements must be processed and rendered to the screen. While this is a minor annoyance in traditional video games, it can be extremely uncomfortable for users in VR.

We define latency as the total time between movement of the user’s head and the updated image being displayed on the screen (“motion-to-photon”); it includes the times for sensor response, fusion, rendering, image transmission, and display response. Past research findings on the effects of latency are somewhat mixed.

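
Whatever the exact comfort thresholds are, a practical first step is knowing when your app exceeds its frame budget, since every long frame adds to motion-to-photon latency. A minimal Unity sketch of a frame-time logger follows; the 90 Hz target and the 1.5x threshold are illustrative assumptions, not Oculus requirements.

```csharp
using UnityEngine;

// Logs a warning whenever a frame takes noticeably longer than the target
// frame budget, as a rough proxy for added motion-to-photon latency.
public class FrameBudgetMonitor : MonoBehaviour
{
    public float targetFrameRate = 90f;   // e.g. Rift; adjust for your device

    void Update()
    {
        float budget = 1f / targetFrameRate;
        float frameTime = Time.unscaledDeltaTime;

        if (frameTime > budget * 1.5f)
            Debug.LogWarningFormat("Long frame: {0:F1} ms (budget {1:F1} ms)",
                frameTime * 1000f, budget * 1000f);
    }
}
```
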
Many experts recommend minimizing latency to reduce discomfort, because lag between head movements and the corresponding updates on the display can lead to sensory conflicts and errors in the vestibulo-ocular reflex. We therefore encourage minimizing latency as much as possible. It is worth noting that some research with head-mounted displays suggests a fixed latency creates about the same degree of discomfort whether it’s as short as 48 ms or as long as 300 ms, whereas variable and unpredictable latencies in cockpit and driving simulators create more discomfort the longer they become on average. This suggests that people can eventually get used to a consistent, predictable lag, but fluctuating, unpredictable lags become increasingly discomforting the longer they are on average.

We believe the threshold for compelling VR to be at or below 20 ms of latency. Above this threshold, users report feeling less immersed and comfortable in the environment. When latency exceeds 60 ms, head motions and the motions of the virtual world start to feel out of sync, causing discomfort and disorientation. Large latencies are believed to be one of the primary causes of discomfort, and independent of comfort issues, latency can be disruptive to user interactions and presence. Ideally, the closer you are to 0 ms, the better. If some latency is unavoidable, it will be more uncomfortable the more variable it is. Your goal should be the lowest and least variable latency possible.

General User Experience
=======================

The general user experience (UX) best practices focus on the basic interaction between users and your VR environment.

Allow the user to define their own session duration. VR requires a unique physicality that is absent in other display technologies, as users are wearing a device on their head and are often standing and/or moving their bodies. Developers should be mindful of the user’s need to take breaks from their content. Users should always have the freedom to suspend their game and then return to the exact point where they left off at their leisure. Well-timed suggestions to take a break, such as at save points or breaks in the action, are also a good reminder for users who might otherwise lose track of time. Incorporating resting positions into your VR experience may help minimize the fatigue users feel during extended gameplay sessions.

The more time users accumulate in VR, the less likely they are to experience discomfort. This learned comfort is a result of the brain learning to reinterpret visual anomalies that previously induced discomfort, and of user movements becoming more stable and efficient, which reduces vection. We’ve discussed vection in detail in the [Locomotion](https://developer.oculus.com/resources/bp-locomotion/) section of this guide.

1. Developers who test their own games repeatedly are often more resistant to discomfort than new users. We strongly recommend always testing apps with a novice population with a variety of susceptibility levels to VR discomfort, to assess how comfortable the experience will be for a wide audience of users.
2. Users should not be thrown immediately into intense game experiences. Start with more sedate, slower-paced interactions that ease them into the game.
3. Apps that contain intense virtual experiences should warn users about that content so they can approach it as they feel most comfortable.

Optimize your application for short load times.
Loading screens or interstitials in VR may be unavoidable, but users should experience them for as brief a time as possible. Unlike traditional games or applications, where users can do something else during a loading screen, such as check their phone or get a drink of water, VR users are captive to your loading experience. When you do have to show a loading screen, we recommend loading a 3D cubemap overlay. This provides a better experience for the user during loading with a minimal amount of development required.

User Input
==========

This section offers information about how users should interact with the virtual world. Every Oculus VR device is accompanied by a default controller or input method, and we strongly recommend designing your VR application or experience around these input devices. The controllers that ship with our VR devices fall into two categories: those that ship with mobile devices, like Gear VR and Go, and those that ship with positionally tracked devices, like the Rift. Controllers for these platforms have different capabilities, and therefore different best practices.

Mobile device controllers have three degrees of freedom, while the Rift controllers have six degrees of freedom (commonly referred to as 3DOF and 6DOF). 3DOF controllers allow for orientation tracking of the controller, but do not track the controller’s position in space. Mobile VR devices that use 3DOF controllers support only one controller, as the HMD cannot differentiate between multiple controllers. 6DOF controllers support both orientation and positional tracking, allowing a pair of controllers to act as virtual hands that can interact with the VR environment on devices like the Rift.

General Recommendations
-----------------------

In general, there are some things about user input that you should know when designing your VR experience.

Maintain a 1:1 ratio between the movement of the user’s controller in the real world and the movement of its virtual representation. This applies to both rotational and translational movement in space. If you choose to exaggerate the user’s movement in VR, make it so exaggerated (e.g., 4x) that it is readily obvious that it is not a natural sensory experience.

Use the standard button mapping for your application. Experienced VR users are accustomed to certain buttons or movements performing certain actions. Following these mappings makes your application feel familiar, even to first-time users. Our [Button Mapping Tech Note](https://developer.oculus.com/blog/tech-note-touch-button-mapping-best-practices/) details these mappings.

Menus should be touch- or controller-input based, not gaze based. This is a change from the early iterations of Oculus hardware, but it offers a more engaging and interactive experience.

VR users are both left- and right-handed. Accommodate both groups by allowing any interaction to be performed with either hand if two controllers are present, or by respecting the default hand set by the system if the device supports a single controller.

3DOF Controller Best Practices
------------------------------

The recommendations and best practices in this section are specific to 3DOF controllers, like the Gear VR Controller. If you choose to represent the controller in-app, we do not recommend representing it as a hand, or a hand-like object, implying that the controller can be used to grab or manipulate objects; that would be difficult to accomplish with the capabilities of the controller.

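
For a single 3DOF controller, respecting the system's handedness setting and driving a rendered controller model from the controller's orientation might look like the sketch below, assuming the Oculus Integration's OVRInput API (the script name and button handling are illustrative).

```csharp
using UnityEngine;

// Drives a rendered 3DOF controller model (e.g. Gear VR or Go controller)
// from the controller's orientation, honoring the system handedness setting.
// Assumes the Oculus Integration's OVRInput API and that this transform is
// parented under the tracking space.
public class SimpleControllerModel : MonoBehaviour
{
    OVRInput.Controller controller;

    void Start()
    {
        // Respect the default hand the user chose in the system settings.
        controller = OVRInput.GetDominantHand() == OVRInput.Handedness.LeftHanded
            ? OVRInput.Controller.LTrackedRemote
            : OVRInput.Controller.RTrackedRemote;
    }

    void Update()
    {
        // 3DOF controllers report orientation only; position stays wherever
        // you place the model (an arm model can be layered on top).
        transform.localRotation = OVRInput.GetLocalControllerRotation(controller);

        if (OVRInput.GetDown(OVRInput.Button.PrimaryIndexTrigger, controller))
            Debug.Log("Select pressed");
    }
}
```
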
3DOF controllers work well as pointing devices for UI elements. Generally, users find it intuitive to point and select with these controllers. We recommend rendering the controller in the scene and drawing a laser shooting out of the end of the controller (either extending all the way to the UI or fading after a few inches), and rendering a cursor over your UI at the intersection of the ray cast from the controller. The Oculus Home app provides an example of the interaction you could mimic in your app.

If you have a menu that can be paged or smoothly scrolled through, use the controller touchpad. Swiping across the touchpad should move your UI in the same direction as the swipe. For example, if you have a long vertical page, swiping up should move the page upwards to let you see more at the bottom of the page. Allow the user to select with either the trigger or the touchpad click, unless you need to reserve one of those for another action.

When using the controller to aim at objects in VR (e.g., for a shooter), we recommend that you leave the ray-cast pointer on at all times. Your ray cast should emanate from the end of the virtual controller rendered in your application. To differentiate between objects that are and are not interactive in the virtual environment, render the ray at partial opacity, switching to full opacity when it is pointed at an interactive object.

It is usually a good idea to include some form of visual cue when interacting with an object in VR, such as highlighting or making the object slightly larger on hover. This provides the user with two pieces of information: first, that the object is interactive, and second, that the object will be selected if an action is taken.

One other common use of the controller is to control a vehicle in VR. There are many ways to accomplish this. We’ve found the one-handed tilt approach to be comfortable and intuitive: tilt left to steer left, tilt right to steer right. This method keeps the trigger and touchpad available for use during vehicle control.

6DOF Controller Best Practices
------------------------------

The recommendations and best practices in this section are specific to 6DOF controllers, like the Touch controllers.

### Hands

Good hand registration is worth the development time. Touch controllers give you access to hands in VR, not just implements that you can hold, but actual hands. When done properly, virtual hands let you interact with the virtual world intuitively; after all, you already know how to use your hands. When implemented poorly, virtual hands can cause an “uncanny hand valley” effect. In extreme cases, poor hand registration and tracking can cause a proprioceptive mismatch and make the user feel uncomfortable.

Getting virtual hands right means you need good hand registration. Registration occurs when the brain sees virtual hands and accepts them as a representation of the physical hands. For this to happen, a number of requirements need to be met. Most importantly, the hand position and orientation need to match the user’s actual hand; even a slight offset or rotational error in the hand models can lead to poor registration. To get registration right, one method is to put a controller model in the scene and ensure its pivot is correct by peeking out from the bottom of the HMD as you move the controller near your face. Properly implemented, you should see the controller pass from the real world into the virtual world seamlessly.
From there, the next step is to model hands around the controller and then hide the controller model. An easy way to test registration is the “back of the hand” test: run your index finger along the back of your other hand in VR and check whether the touch you’re feeling aligns with the position of your hand graphics. Poorly aligned hands will often be wildly off, but virtual hands with good registration should match closely. We’ve published a detailed look at [Implementing Quality Hands With Oculus Touch](https://developer.oculus.com/blog/implementing-quality-hands-with-oculus-touch/) on our developer blog, including an overview of some of the tools that Oculus provides.

Fleshy, realistic hand models can make users feel uncomfortable if the model does not match their real-world hands. Allow users to customize the appearance of their hands, making them feel more (or less, if that is their preference) real. While large hands, or other objects attached to the hand, are usually accepted by the brain (people can assume their hands are “inside” those objects), hands that are too small can be disconcerting. Semi-transparent ethereal or robot hands are generally successful because they believably map to a wide range of users regardless of gender, age, race, or ethnicity.

Hand-object intersection should not cause a user’s virtual hands to stop tracking in VR. You can’t prevent people from putting their hands through virtual geometry, and trying to prevent it with collision detection makes it feel like hand tracking has been lost, which is uncomfortable. If you choose to use life-like hands, consider using physical hands that collide with world geometry, but then immediately display a second set of ethereal or transparent hands that continue to track with the player’s motion when the life-like hands get stuck on or collide with a solid object. This visually indicates that tracking hasn’t been lost, and the ethereal appearance of the hands suggests that they can’t be used to manipulate the world.

Avoid hand animations. Hand registration requires that your brain believes, at least somewhat, that your avatar’s hands are your own. Having the hands animate, or move without any real-world input, can be very uncomfortable. The exception appears to be quick animations that are expected consequences of user input, for example, animating recoil when a user fires a virtual gun.

Virtual forearms and elbows are difficult to get right. Discomfort can be caused by proprioception, your brain’s innate sense of where your limbs are located in physical space, even when you aren’t looking at them. This sensory system is so accurate that trying to simulate virtual limbs, like forearms and elbows, can be difficult. You can see virtual limbs, but you can’t feel them (in the absence of haptic feedback) or locate them using proprioception. It’s common for developers to attempt to attach an IK arm to a tracked controller, but this solution often results in a discontinuity between the rendered virtual arm position and the person’s real arm, which proprioception can detect. There are titles that have achieved very believable forearms and elbows, but this is a design problem that requires a great deal of patience and attention. Incorrect arms are worse than no arms at all; we generally recommend not drawing anything above the wrist.

### Interacting with the Virtual World

Tracked controllers can give you virtual hands, but they can’t simulate the torque or resistance we feel when manipulating weighty objects.
Interactions that involve significant resistance, like lifting a heavy rock or pulling a large lever, don’t feel believable. However, lightweight interactions involving objects for which we don’t expect to feel significant physical resistance, like flicking a light switch, are readily believable. When designing hand interactions, consider the apparent weight of the objects people are manipulating and look for ways to make the lack of physical resistance believable. Use caution when working with interactions that require two hands: lifting a heavy box or holding a pitchfork with two hands is likely to feel strange, because the rigid constraints we expect between the hands won’t exist in VR.

The best way to pick up an object in VR is to grab the object the way it was designed to be held. Nobody expects to pick a gun up by its barrel or a coffee cup by its base. When a person tries to pick up an object that affords gripping in an obvious way, you should snap the object into their hand at the correct alignment. They reach for the gun, close their hand, and come away with the gun held perfectly, just as they expected. We don’t recommend objects that require shifting your grip, like ping pong paddles; the process of adjusting the grip can feel unnatural. When picking up and gripping an object, we recommend using the grip button on the Touch controller.

We recommend that you avoid having users pick up objects off the floor. Sensors are frequently positioned on a desk and may be occluded if the user bends down to pick up an object below the elevation of the desk. Distance grabbing is an effective way to allow users to interact with objects outside their normal reach. See the [Distance Grab Sample Now Available in Oculus Unity Sample Framework](https://developer.oculus.com/blog/distance-grab-sample-now-available-in-oculus-unity-sample-framework/) tech note for information about this concept and how to implement it.

Some objects don’t have an obvious handle or grip (e.g., a soccer ball) and should attach to the hand at the moment the grip trigger is depressed. In this case, the offset and orientation of the object relative to the hand are arbitrary, but as long as the object sticks to the hand it will feel believable. You shouldn’t snap or correct the object in this case; just stick it to the hand at whatever offset it had when the grip was invoked.

Throwing objects reliably with tracked controllers is harder than it looks. Different objects afford throwing in different ways; for example, a frisbee is thrown with a completely different motion than a paper airplane. Making both of these actions believable requires building per-object physics rules to govern throwing. When designing your control scheme for throwing objects, use caution if you use the grip button for hold/release, and consider using the trigger button for throw interactions that require force. Users have been known to throw the Touch controller in the real world when the grip button is used.

Use haptics to indicate when a user has interacted with an object. This simple addition makes interaction with objects more believable.

### [**👉 VR Locomotion**](https://hackmd.io/@arpad-gabor/vr-locomotion)