Avatars
=======
Avatars are representations of a user’s body in VR. These best practices will help comfortably display a user’s avatar in VR.
User avatars typically correspond to the user’s position, movement and gestures. The user can see their own virtual body and observe how other users see and interact with them. That said, VR is often a first-person experience in which an avatar is unnecessary; the user is simply disembodied in virtual space.
Avatars can give users a strong sense of scale and of their body’s volume in the virtual world. The virtual avatar should be a realistic height in relation to the scene for a comfortable experience.
However, presenting a realistic avatar body that contradicts the user’s proprioception (e.g., a walking body while they are seated) can be uncomfortable. Generally, users react positively to being able to see their virtual bodies and avatars can serve as a means of eliciting an aesthetic response. User testing and evaluation is critical to see if and how avatars will work for your application.
Any weapons or tools used should be integrated with the avatar, so the user sees the avatar actually holding them. Developers that use input devices for body tracking should track the user’s hands or other body parts and update the avatar to match with as little latency as possible.
Research suggests that providing users with an avatar that anticipates and foreshadows the motion they are about to experience allows them to prepare for it in a way that reduces discomfort. This can be a benefit in 3rd-person games. If the avatar’s actions (e.g., a car begins turning, a character starts running in a certain direction) reliably predict what the camera is about to do, this may prepare the user for the impending movement through the virtual environment and make for a more comfortable experience.
Designing for Hands
===================
Hands are a promising new input method, but there are limitations to what we can implement today due to computer vision and tracking constraints. The following design guidelines enable you to create content that works within these limitations.
In these guidelines, you’ll find interactions, components, and best practices we’ve validated through researching, testing, and designing with hands. We also included the principles that guided our process. This information is by no means exhaustive, but should provide a good starting point so you can build on what we’ve learned so far. We hope this helps you design experiences that push the boundaries of what hands can do in virtual reality.
The Benefits
------------
People have been looking forward to hand tracking for a long time, and for good reason. There are a number of things that make hands a preferable input modality to end users.
- Hands are a highly approachable and low-friction input that require no additional hardware
- Unlike other input devices, they are automatically present as soon as you put on a headset
- Self and social presence are richer in experiences where you’re able to use your real hands
- Your hands aren’t holding anything, leaving them free to make adjustments to physical objects like your headset
The Challenges
--------------
There are some complications that come up when designing experiences for hands. Thanks to sci-fi movies and TV shows, people have exaggerated expectations of what hands can do in VR. But even expecting your virtual hands to work the same way your real hands do is currently unrealistic for a few reasons.
- There are inherent technological limitations, like limited tracking volume and issues with occlusion
- Virtual objects don’t provide the tactile feedback that we rely on when interacting with real-life objects
- Choosing hand gestures that activate the system without accidental triggers can be difficult, since hands form all sorts of poses throughout the course of regular conversation
You can find solutions we found for some of these challenges in our [Best Practices](https://developer.oculus.com/resources/hands-design-bp/) section.
The Capabilities
----------------
To be an effective input modality, hands need to allow for the following interaction primitives, or basic tasks:
- Targeting, which moves focus to a specific object
- Selection, which lets users choose or activate that object
- Manipulation, or moving, rotating, or scaling the object in space
These interactions can be performed directly, using your hands as you might in real life to poke and pinch at items, or they can be performed indirectly through raycasting, which points a ray at objects or two-dimensional panels.
You can find more of our thinking in our [Interactions](https://developer.oculus.com/resources/hands-design-interactions/) section.
Today, human ergonomics, technological constraints and disproportionate user expectations all make for challenging design problems. But hand tracking has the potential to fundamentally change the way people interact with the virtual world around them. We can’t wait to see the solutions you come up with.
Principles
==========
There are a few things that make hands a unique and unprecedented input modality. They’re less complicated than controllers, whose buttons and analog sticks can require a learning curve for newer users. They don’t need to be paired with a headset. And by virtue of being attached to your arms, hands are constantly present in a way that other input devices can’t be.
But they also come with some challenges: Hands don’t provide haptic feedback like controllers do, and there’s no inherent way to turn them on or off. Here are the principles that helped us come up with solutions to the unique design challenges that hand tracking presents.
Provide Continuous Interaction Feedback
---------------------------------------

Hands don’t come with buttons or switches the way other input modalities do. This means there’s nothing hinting at how to interact with the system (the way buttons hint that they’re meant to be pushed), and no tactile feedback to confirm user actions. To solve this, we strongly recommend communicating affordances through clear signifiers and continuous feedback on all interactions.
- Affordances are what you can do with an object
- Signifiers communicate those affordances to the user, letting them know what they can do with the object
- Feedback is a way of confirming a user’s state throughout the interaction
For example, our system pointer component affords the ability to make selections. Its squishable design signifies that you’re meant to pinch to interact with it. Then, the pointer itself provides feedback as soon as the user begins to pinch, by changing shape and color. Once the fingers touch, the pointer provides confirmation with a distinct visual and audible pop.
You can see more about signifiers and feedback in our [Best Practices](https://developer.oculus.com/resources/hands-design-bp).
Constrain Inputs to Improve Usability
-------------------------------------

Hands are practically unlimited in terms of how they move and the poses they can form. This presents a world of opportunities, but too much possibility can lead to a noisy system.
Throughout these guidelines, you’ll find places where we recommend limitations. We limit which of your hand’s motions can be interpreted by the system, for more accurate interactions. We snap objects to two-dimensional surfaces when rotating and scaling, to limit their degrees of freedom and provide more control. We created components like pull-and-release sliders, to limit movement to just one axis and enable more precise selection.
These limitations help increase accuracy, and actually make it easier to navigate the system or complete an interaction.
Design for a Multi-Modal Future
-------------------------------

We envision a future where people can use the right input modality for the right job in the right context. While hands offer unique new interaction possibilities, they are also an important step toward that future.
With other input devices like controllers and computer mice, batteries can die or you may need to put them down to pick something else up. In those scenarios, you can use your hands for an uninterrupted experience. So while you’re designing experiences for hands, consider the connective capabilities that this input modality can make room for.
Remember That Hands Aren’t Controllers
--------------------------------------
It’s very tempting to simply adapt existing interactions from input devices like the Touch Controller, and apply them to hand tracking. But that process will limit you to already-charted territory, and may lead to interactions that would feel better with controllers while missing out on the benefits of hands.
Instead, focus on the unique strengths of hands as an input and be aware of the specific limitations of the current technology to find new hands-native interactions. For example, one question we asked was how to provide feedback in the absence of tactility. The answer led to a new selection method, which then opened up the capability for all-new 3D components.
It’s still early days, and there’s still so much to figure out. We hope the solutions you find guide all of us toward incredible new possibilities.
Best Practices
==============
We’ve compiled some of the best practices we’ve learned through researching, testing, and designing with hands. Most of our experiments have been for the system, but the learnings we gathered can be applied to any experiences. Our hope is that this resource saves you time and effort as you learn about this new input modality, so you don’t have to start from scratch when building experiences.
The Pinch
---------

The thumb-index pinch is our preferred method of selection. The contact between your thumb and index finger can compensate for the lack of tactile feedback that other input devices typically provide.
The pinch also works well because it’s simple to perform and easy to remember. Even more importantly, it’s not a hand pose that’s frequently used, which makes it less likely that the system will pick up unintended triggers.
Note: Pinches have three states: Open, closing, and closed. The “closing” state is a way to provide confirmation of a user’s intent to act, which reassures them that the sensors are picking up their motion. It also prevents unintended selections if their thumb and index finger accidentally approach each other.
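As a rough illustration, the three pinch states can be driven by the distance between the thumb and index fingertips, with the “closing” band acting as a buffer before a selection fires. The sketch below is an assumption about how this might be implemented; the threshold values are placeholders, not values from any SDK.

```python
from enum import Enum

class PinchState(Enum):
    OPEN = "open"
    CLOSING = "closing"
    CLOSED = "closed"

# Placeholder thresholds in meters; real values would come from tuning and user testing.
CLOSING_THRESHOLD = 0.03   # fingertips approaching each other
CLOSED_THRESHOLD = 0.01    # fingertips effectively touching

def classify_pinch(thumb_index_distance: float) -> PinchState:
    """Map the thumb-index fingertip distance to one of the three pinch states.

    The CLOSING band gives the user confirmation that the pinch is being
    picked up, and prevents a selection from firing the instant the fingers
    happen to drift near each other.
    """
    if thumb_index_distance <= CLOSED_THRESHOLD:
        return PinchState.CLOSED
    if thumb_index_distance <= CLOSING_THRESHOLD:
        return PinchState.CLOSING
    return PinchState.OPEN
```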
Signifiers and Feedback
-----------------------
As mentioned in our [Principles](https://developer.oculus.com/resources/hands-design-principles/) section, hands don’t provide tactile feedback the way controllers and other input devices do. It’s important to compensate for this with both visual and audio feedback throughout user interactions.
We think of this in two ways: Signifiers communicate what a user can do with a given object. Feedback confirms the user’s state throughout the interaction. Visually, objects can change shape, color, opacity and even location in space (like a button moving toward an approaching finger to signify that it’s meant to be pushed). You can also play with volume, pitch and specific tones to provide audio feedback. These elements can work together to guide users seamlessly as they navigate the system.
### Components
Each of the components that we experimented with for hand tracking was designed to provide continuous feedback throughout user interactions.
- System Pointers look squishable to signify that you’re supposed to pinch to interact with them. They also change shape as the thumb and index finger approach each other and complete the pinch
- Buttons begin reacting as soon as they’re being targeted, rather than only reacting once they’ve been pressed
- Cursors change size and color throughout the user interaction
- Hands change states to let users know when their hands are being actively tracked, as well as when they’re about to take action
### Audio Feedback
Computer mice and keyboards provide satisfying haptic feedback with clicks. Controllers provide it through vibration. Since hands don’t have any comparable haptic feedback, we found that it was important to design distinct sounds that confirm when a user interacts with a component.
Note: We also learned that it can be easy to over-correct and create a system that’s too noisy, so be mindful of finding the right balance.
### Self-Haptic Feedback
Using the pinch as a selection method is a way of providing tactile feedback during an interaction. The contact between your thumb and index finger can be thought of as a satisfying proxy for the click of a button, or for feeling the object you’re grasping.
See our [User Interface Components](https://developer.oculus.com/resources/hands-design-ui/) section for more about signifiers and components.
Raycasting
----------

Raycasting is our preferred interaction method for far-field targeting.
- This method of selection fits into the interaction model for apps that were designed for controllers
- It’s ergonomically more comfortable than direct interactions over long periods of time, since it tends to keep users in a neutral body position
- Raycasting allows for multiple forms of feedback, since the cursor and pointer can be designed to confirm interaction states (see our [User Interface Components](https://developer.oculus.com/resources/hands-design-ui/) section).
### Hybrid Ray
We’ve experimented with tracking from the wrist, but we found that hands have some natural tremor that gets magnified over distance when raycasting. To solve this, we use a secondary position on the body to stabilize the raycast.
The optimal point of origin for this secondary position varies depending on whether you’re standing or sitting. For standing experiences a shoulder ray works well, because target objects would most likely be below your shoulders. When seated, target objects are likely at a height that would require raising the wrist uncomfortably high, so an origin point near the hip is a less fatiguing alternative.
However, for most experiences you won’t know whether a user is sitting or standing, and they may even move freely between the two. Our solution was to develop a raycasting model that blends between the shoulder and the hip based on the angle of your gaze.
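A minimal sketch of this blending, assuming you already have the shoulder, hip, and hand positions and the head’s pitch in world space; the blend range below is an assumption, not the curve we shipped.

```python
import numpy as np

def blended_ray_origin(shoulder: np.ndarray,
                       hip: np.ndarray,
                       gaze_pitch_deg: float,
                       min_pitch: float = -40.0,   # looking down: origin at the hip (assumed)
                       max_pitch: float = 10.0     # looking up: origin at the shoulder (assumed)
                       ) -> np.ndarray:
    """Blend the raycast's secondary origin between the hip and the shoulder
    based on gaze pitch."""
    t = (gaze_pitch_deg - min_pitch) / (max_pitch - min_pitch)
    t = float(np.clip(t, 0.0, 1.0))
    return hip * (1.0 - t) + shoulder * t

def stabilized_ray_direction(origin: np.ndarray, hand: np.ndarray) -> np.ndarray:
    """Cast from the blended body origin through the hand, which damps the
    natural hand tremor compared to casting from the wrist alone."""
    direction = hand - origin
    return direction / np.linalg.norm(direction)
```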
Tracking
--------
There are certain limitations to computer vision that are important to consider when designing your experiences.

### Tracking Volume
The sensors have a limited tracking volume, which means objects outside of their horizontal and vertical field of view won’t be detected. To make sure a user’s relevant motions are being tracked, try to avoid forcing people to reach outside of the tracking volume. Keep in mind, however, that the hand-tracking volume is larger than the display field of view — so hands may be tracked even when the user can’t see them.

### Occlusion
The more of your hands the headset’s sensors can see, the more stable tracking will be. Try to design interactions in a way that encourages users to keep their palms facing either towards or away from the headset. Tracking will diminish if the fingers are blocked by the back of the hand or curled up in the palm.
It’s also important to avoid the overlap of two hands due to current computer vision limitations. A good way around this is to design interactions that can be performed with just one hand, which has the added benefit of making your interactions more accessible.
Ergonomics
----------

When designing experiences, it’s important to make sure the user can remain in a neutral body position as much as possible. Ideally, users should be able to interact with the system while keeping the arm close to the body, and the elbow in line with the hip. This allows for a more comfortable ergonomic experience, while keeping the hand in an ideal position for the tracking sensors.

Interactions should be made in a way that minimizes muscle strain, so try not to make people reach too far from their body too frequently. When organizing information in space, the features a user will interact with most often should be closer to the body. On the flipside, the less important something is, the farther from the body you can place it.
Hand Representation
-------------------
At the most basic level, hand representation needs to fulfill two functions:
1. Provide a sense of embodiment within the VR experience
2. Let users know what their tracked hands are capable of
Your first instinct might be to create a realistic representation of a human hand, but this can be an expensive and difficult endeavor. Realistic hands often feel uncanny at best, and at worst can make users feel disembodied. Instead, think about what works best for the experience you’re building.
If the visual representation of hands is an important part of your immersive experience, then it’s important to make sure the representation is either anonymous enough for anyone to feel embodied (like an octopus hand), or can be customized by users to suit their needs.
In contexts where your hands are primarily an input rather than a part of the experience, we’ve found that a functionally realistic approach is ideal. This means that the representation itself lets people know what they can do with their hands within a given experience, without requiring a level of detail that can be hard to produce and easy to get wrong. You can see the approach we took to system hands in our [User Interface Components](https://developer.oculus.com/resources/hands-design-ui/) section.

Hand States and Gates
---------------------

Imagine if every time you moved your hand, you accidentally dragged a virtual lamp with it. Establishing gating logic helps avoid that scenario by filtering out common cases of unintended interactions. Here are some examples of gating logic that we’ve had success with.
### Idle State
When your hands are at rest, they’re often still in the tracking field of view. But as hands relax they tend to curl up, which can trigger unintended interaction. To solve this, we’ve experimented with an idle state.
Hands enter an idle state when they’ve been lowered a specific distance from the head and haven’t moved in a specific amount of time. Then, for a hand to become active again, it must re-enter the tracking field of view and change pinch states. This action is deliberate enough to ensure the hand won’t become active without the user’s intent.
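A sketch of this gating logic follows; the drop distance, movement tolerance, and dwell time are assumptions for illustration only.

```python
import time
import numpy as np

IDLE_DROP_BELOW_HEAD = 0.45   # meters the hand must sit below the head (assumed)
IDLE_MOVE_TOLERANCE = 0.02    # meters of drift still considered "at rest" (assumed)
IDLE_DWELL_SECONDS = 2.0      # how long the hand must stay still (assumed)

class IdleTracker:
    """Marks a hand idle once it has been lowered and left still for a while."""

    def __init__(self):
        self.rest_position = None
        self.still_since = None
        self.idle = False

    def update(self, hand_pos: np.ndarray, head_pos: np.ndarray, now: float = None) -> bool:
        now = time.monotonic() if now is None else now
        lowered = (head_pos[1] - hand_pos[1]) > IDLE_DROP_BELOW_HEAD

        # Restart the dwell timer whenever the hand drifts away from where it settled.
        if (self.rest_position is None or
                np.linalg.norm(hand_pos - self.rest_position) > IDLE_MOVE_TOLERANCE):
            self.rest_position = hand_pos
            self.still_since = now

        if lowered and (now - self.still_since) >= IDLE_DWELL_SECONDS:
            self.idle = True
        return self.idle

    def try_reactivate(self, in_tracking_volume: bool, pinch_state_changed: bool):
        """Leaving idle requires a deliberate action: re-entering the tracking
        volume and changing pinch states."""
        if self.idle and in_tracking_volume and pinch_state_changed:
            self.idle = False
```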
### Pointing State
Unless a user clearly indicates that they want to interact with the system, having a constant cursor and pointer can be distracting. Establishing a pointing state can let the system know when the pointer and cursor are desired.

The hand enters the pointing state when it’s pointing away from the body and toward the panel at a very specific angle. This signals the user’s intent to scroll, browse or select items, which is an appropriate time for the cursor and raycast to appear.
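A sketch of the check: enter the pointing state when the angle between the hand’s pointing direction and the direction from the hand to the panel falls below a threshold. The angle value is an assumption.

```python
import numpy as np

POINTING_ANGLE_DEG = 25.0  # illustrative threshold, not a shipped value

def is_pointing_at_panel(hand_pos: np.ndarray,
                         hand_forward: np.ndarray,
                         panel_center: np.ndarray) -> bool:
    """True when the hand is aimed closely enough at the panel that showing
    the cursor and raycast makes sense."""
    to_panel = panel_center - hand_pos
    to_panel = to_panel / np.linalg.norm(to_panel)
    forward = hand_forward / np.linalg.norm(hand_forward)
    angle = np.degrees(np.arccos(np.clip(np.dot(forward, to_panel), -1.0, 1.0)))
    return angle <= POINTING_ANGLE_DEG
```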
### System Gesture
Hands don’t have a Home button, so we needed to provide users with a way of summoning the system menu while in third-party apps and experiences. To prevent users from accidentally performing the gesture while in immersive experiences, we gated the gesture in two ways: Holding the palm up while looking at it, then holding a pinch.

Raising the hand up toward the face both puts the hands in a good tracking position for the sensors, and is an unusual enough pose that it’s unlikely to be performed accidentally. The hand then glows to let the user know they’ve entered this state, so they can lower their hand if this was unintentional. Finally, the user has to pinch and hold to complete the gesture and summon the system.
Note: While most of our interactions are performed through analog control, this system gesture is a good case study for how to make an abstract gesture feel responsive, while keeping gates in mind.
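A sketch of those two gates, assuming you can query the palm position and normal, the head position and forward vector, and the pinch state; the dot-product thresholds and hold time are illustrative.

```python
import numpy as np

PALM_FACING_DOT = 0.7     # how directly the palm must face the head (assumed)
GAZE_AT_PALM_DOT = 0.9    # how directly the head must look at the palm (assumed)
PINCH_HOLD_SECONDS = 0.8  # how long the pinch must be held (assumed)

def system_gesture_state(palm_pos: np.ndarray, palm_normal: np.ndarray,
                         head_pos: np.ndarray, head_forward: np.ndarray,
                         pinch_closed: bool, pinch_held_for: float) -> str:
    """Return 'inactive', 'armed' (the hand glows), or 'triggered'.

    Direction vectors are assumed to be unit length.
    """
    to_head = head_pos - palm_pos
    to_head = to_head / np.linalg.norm(to_head)

    palm_faces_head = np.dot(palm_normal, to_head) > PALM_FACING_DOT
    gaze_at_palm = np.dot(head_forward, -to_head) > GAZE_AT_PALM_DOT

    # Gate 1: palm up toward the face while looking at it.
    if not (palm_faces_head and gaze_at_palm):
        return "inactive"
    # Gate 2: pinch and hold to actually summon the system.
    if pinch_closed and pinch_held_for >= PINCH_HOLD_SECONDS:
        return "triggered"
    return "armed"
```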
Interactions
============

There are several factors we’ve experimented with when it comes to designing interactions. The Interaction Options section breaks down some of the different considerations you might face, depending on the kind of experience you’re designing. Then, the Interaction Primitives section breaks down the options that work best for specific tasks, based on what we’ve experimented with.
Interaction Options
-------------------
The best experiences incorporate multiple interaction methods, to provide the right method for any given task or activity.
Here we outline the different options to consider when designing your experience.
### Target Distance
**Near-Field Components:** The components are within arm’s reach. When using direct interactions, this space should be reserved for the most important components, or the ones you interact with most frequently.
**Far-Field Components:** The components are beyond arm’s reach. To interact with them, you would need to use a raycast or locomote closer to the component to bring it into the near-field.
Note: A mix of near-field and far-field works for many experiences.
### Interaction Methods
**Direct:** With direct interactions, your hands interact with components, so you’d reach a finger out to “poke” at a button, or reach out a hand to “grasp” an object by pinching it. This method is easy to learn, but it limits you to interactions that are within arm’s reach.
**Raycasting:** Raycasting is similar to the interaction method you may be familiar with from the Touch controllers. This method can be used for both near- and far-field components, since it keeps users in a neutral body position regardless of target distance.
### Selection Methods
**Poking:** Poking is a direct interaction where you extend and move your finger towards an object or component until you “collide” with it in space. However, this can only be used on near-field objects, and the lack of tactile feedback may mean relying on signifiers and feedback to compensate.
**Pinching:** Pinching can be used with both the direct and raycasting methods. Aim your raycast or move your hand toward your target, then pinch to select or grasp it. Feeling your thumb and index finger touch can help compensate for the lack of tactile feedback from the object or component.
Note: Using this method also makes for a more seamless transition between near- and far-field components, so the user can pinch to interact with targets at any distance.
### Relationship Between Hand and Output
**Absolute:** With absolute movements, there’s a 1:1 relationship between your hand and the output. The output can include cursors, objects, and interface elements. For example, for every 1° your hand moves, the cursor will move 1° in that direction. This feels intuitive and mirrors the way physical objects behave, but it can be tiring and often limits the interaction possibilities.
**Relative:** With relative movements, you can adjust the ratio of distance between your hand’s movement and how much the output moves. For example, for every 1° your hand moves, the cursor will move 3° in that direction. You can make the ratio smaller for more precision, or increase the ratio for more efficiency when moving objects across broad distances.
Note: For even more efficiency, you can use a variable ratio, where the output moves exponentially faster the more quickly you move your hand. Another option is an acceleration-based ratio, which is similar to using a joystick. If a user keeps their hand in a far-left position and holds it there, the object will continue moving in that direction. However, this makes it more difficult to place an object where you want it, so it’s not recommended for experiences where precise placement is the goal.
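To make the distinction concrete, here is a sketch of absolute, relative, and variable-ratio mappings from hand motion to cursor motion; the gain values are placeholders.

```python
def absolute_update(cursor_deg: float, hand_delta_deg: float) -> float:
    """1:1 mapping: every degree of hand motion moves the cursor one degree."""
    return cursor_deg + hand_delta_deg

def relative_update(cursor_deg: float, hand_delta_deg: float, gain: float = 3.0) -> float:
    """Fixed ratio: gain > 1 covers distance efficiently, gain < 1 adds precision."""
    return cursor_deg + gain * hand_delta_deg

def variable_gain_update(cursor_deg: float, hand_delta_deg: float, dt: float,
                         base_gain: float = 1.0, speed_gain: float = 0.02) -> float:
    """Variable ratio: the faster the hand moves, the larger the effective gain."""
    speed = abs(hand_delta_deg) / dt  # degrees per second
    return cursor_deg + (base_gain + speed_gain * speed) * hand_delta_deg
```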
### System Control Options
**Abstract Gestures:** Abstract gestures are specific gestural movements that can be used to navigate the system and manipulate objects. You may have seen futuristic versions of this from sci-fi movies and TV shows. While they can be useful for providing abstract function, in reality abstract gestures come with a few drawbacks:
- Abstract gestures are difficult to learn, remember and track
- It can be difficult to choose gestures that users won’t perform accidentally, which can cause unintended triggers
- It’s important to be mindful of what gestures mean in different cultures, to avoid implementing any that are inappropriate or profane.
We recommend using abstract gestures sparingly, and to instead use analog control to manipulate and interact with most virtual interfaces and objects.
Our System Gesture is the only abstract gesture we implemented in our system.
**Analog Control:** When using analog control, your hand’s motion has an observable effect on the output. This makes interactions feel more responsive. It’s also easy to understand: when you move your hand to the right, the cursor, object, or element moves to the right. Raycasting and direct interactions are both examples of analog control.
These interaction options work together in different ways, depending on the circumstances. For example, if your target objects are in the far-field, your only available interaction method is raycasting (unless you bring the object into the near-field). Or for direct poking interactions with near-field objects, your only option for hand-output relationship is absolute.
This chart helps lay out the available options for different circumstances.

Interaction Primitives
----------------------
As we said in the [Introduction](https://developer.oculus.com/resources/hands-design-intro/), hands need to allow for 3 interaction primitives, or basic tasks, to be an effective input modality.
As you’ll see, some of the above interaction options may work better than others for your specific experience.
### Select Something
There are two kinds of things you might select: 2D panel elements, and 3D objects. Poking works well for buttons or panel selections. But if you’re trying to pick up a virtual object, we’ve found that the thumb-index pinch works well, since it helps compensate for the fact that virtual objects don’t provide tactile feedback. This can be performed both directly and with raycasting.

### Move Something
If the target is within arm’s reach, you can move it with a direct interaction. Otherwise, raycasting can help maintain a neutral body position throughout the movement.
Absolute movements can feel more natural and easy, since this is similar to how you move and place items in real life. For more efficiency, you can use relative movements to move the object easily across any distance or to place it in precise locations.

### Rotate Something
If you’re looking for an intuitive rotation method and aren’t too worried about precision, you can make objects follow the rotation of a user’s hand when grasped.
A more precise method of rotation is to snap the object to a 2D surface, like a table or a wall. This can limit the object’s degrees of freedom so that it can only rotate on one axis, which makes it easier to manipulate. If it’s a 2D object, you can similarly limit its degrees of freedom by having the object automatically rotate to face the user.
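One way to realize that snapping is to keep only the component of the hand’s rotation that occurs around the surface normal, as in this sketch (the helper names are hypothetical):

```python
import numpy as np

def signed_angle_about_axis(v_from: np.ndarray, v_to: np.ndarray, axis: np.ndarray) -> float:
    """Signed angle (radians) between two vectors measured around `axis`,
    after projecting both onto the plane perpendicular to `axis`."""
    axis = axis / np.linalg.norm(axis)
    a = v_from - np.dot(v_from, axis) * axis
    b = v_to - np.dot(v_to, axis) * axis
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    angle = np.arccos(np.clip(np.dot(a, b), -1.0, 1.0))
    sign = np.sign(np.dot(axis, np.cross(a, b))) or 1.0
    return float(sign * angle)

def snapped_rotation(hand_forward_at_grab: np.ndarray,
                     hand_forward_now: np.ndarray,
                     surface_normal: np.ndarray) -> float:
    """Rotation to apply to the snapped object: only the hand's twist about
    the surface normal is kept, so the object rotates on a single axis."""
    return signed_angle_about_axis(hand_forward_at_grab, hand_forward_now, surface_normal)
```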

### Resize Something
Uniform scaling is the easiest way to resize an object, rather than trying to stretch it vertically or horizontally.
Similar to rotation, an easy method for resizing is to snap an object to a 2-dimensional surface and allow it to align and scale itself. However, this limits user freedom, since the size of the object is then automatically determined by the size of the surface.
To define specific sizes, you can also resize objects using both hands. While your primary hand pinches to grasp the object, the second hand pulls on another corner to stretch or shrink the object. We found this to be problematic for accessibility reasons, as people may have difficulty with hand dexterity, or their second hand might be occupied. Plus, this method increases the likelihood of your hands crossing over each other, which leads to occlusion.
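The two-handed resize can be sketched as a uniform scale driven by the ratio of the current distance between the hands to their distance when the grab started; the clamp range is an assumption.

```python
import numpy as np

def two_hand_scale(primary_now: np.ndarray, secondary_now: np.ndarray,
                   primary_at_grab: np.ndarray, secondary_at_grab: np.ndarray,
                   scale_at_grab: float,
                   min_scale: float = 0.1, max_scale: float = 10.0) -> float:
    """Uniform scale factor for the grabbed object while both hands pinch it."""
    start_dist = np.linalg.norm(primary_at_grab - secondary_at_grab)
    if start_dist == 0.0:
        return scale_at_grab
    current_dist = np.linalg.norm(primary_now - secondary_now)
    return float(np.clip(scale_at_grab * current_dist / start_dist, min_scale, max_scale))
```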
**Handles:** Another method for manipulation is to attach handles to your object. This provides separate handles that control movement, rotation, and resizing, respectively. Users pinch (either directly or with a raycast) to select the control they want, then perform the interaction.
This allows users to manipulate objects easily regardless of the object’s size or distance. Separating movement, rotation, and scale also enables precise control over each aspect, and allows users to perfect the object’s positioning and change its rotation without the object moving around in space. However, having to perform each manipulation task separately can become tedious, particularly in contexts where this level of precision is not necessary.

User Interface Components
=========================
Hands don’t come with buttons or switches the way other input devices do. To compensate for this, we experimented with components that provide continuous feedback throughout user interactions.
You can read more about affordances, signifiers, and feedback in our [Best Practices](https://developer.oculus.com/resources/hands-design-bp) section.
Buttons
-------
Buttons are one of the most common components out there, so we’ve done a lot of design and research investigating how to interact with them in the absence of tactile feedback.
To provide continuous feedback, buttons should change states throughout an interaction, starting as soon as the hand approaches.

Button States:
1. Default: the body of the button with text
2. Focus: a frame around the button provides a reference for movements of the button between states.
3. Hover: the body of the button moves toward the finger as the finger approaches
4. Contact: the button changes color or receives a touch mark when touched
5. Action: the button moves backward through the frame when pressed
6. Released: the body of the button moves back to its default state once the finger breaks contact
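A sketch of how the six states above might be driven from the fingertip’s signed distance to the button surface (negative means the finger has pushed past the surface); the distances are assumptions.

```python
from enum import Enum

class ButtonState(Enum):
    DEFAULT = 1
    FOCUS = 2
    HOVER = 3
    CONTACT = 4
    ACTION = 5
    RELEASED = 6

FOCUS_DISTANCE = 0.15  # meters at which the frame appears (assumed)
HOVER_DISTANCE = 0.05  # meters at which the button moves toward the finger (assumed)
PRESS_DEPTH = 0.01     # meters of travel past the surface that counts as a press (assumed)

def button_state(fingertip_to_surface: float, was_pressed: bool) -> ButtonState:
    """Map the fingertip's signed distance to the button surface to a state.

    `was_pressed` lets the button report RELEASED for the frame after the
    finger breaks contact, so it can animate back to its default state.
    """
    if fingertip_to_surface <= -PRESS_DEPTH:
        return ButtonState.ACTION
    if fingertip_to_surface <= 0.0:
        return ButtonState.CONTACT
    if was_pressed:
        return ButtonState.RELEASED
    if fingertip_to_surface <= HOVER_DISTANCE:
        return ButtonState.HOVER
    if fingertip_to_surface <= FOCUS_DISTANCE:
        return ButtonState.FOCUS
    return ButtonState.DEFAULT
```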

### Near Field Buttons
Button Size: Buttons should be at least 6 centimeters on each axis. Targets smaller than 5 centimeters quickly drop off in accuracy and speed, and there’s somewhat of a plateau above 6 centimeters.
Collider Shape: Include a collider area just outside the button to confirm near-misses as selections. Squares or rectangles tend to perform better than rounded edges.
Layout Density: Allow 1 to 3 centimeters between buttons to maximize performance and minimize misses. Any more space than that will have a negative effect on accuracy.
Shape: The visual shape of a button has no effect on performance, as long as it’s confined in a rectangular collision boundary.
Distance: Placing buttons at 80% of a user’s farthest reach is ideal. Assuming a median arm length of 61 centimeters, that puts buttons about 49 centimeters from the center eye of the headset.
### Far Field Buttons
Angular size: This is the size of the button regardless of distance, and should be no smaller than 1.6 degrees.
For far-field buttons, it’s important to make sure that targeting precision doesn’t suffer as a button gets further away. To do this, we calculate buttons in angular size, which measures the proportion of your field of view that the button takes up. For example, if a target is 2 meters away and its angular size is 1.6 degrees, the target diameter would be 5.6 centimeters. At 10 meters, the target diameter would be 28 centimeters.
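The conversion from angular size to physical size is simple trigonometry, as in this sketch:

```python
import math

def target_diameter(distance_m: float, angular_size_deg: float = 1.6) -> float:
    """Physical diameter (meters) a target needs to span the given angular
    size at the given distance."""
    return 2.0 * distance_m * math.tan(math.radians(angular_size_deg) / 2.0)

print(round(target_diameter(2.0) * 100, 1))   # ~5.6 cm at 2 meters
print(round(target_diameter(10.0) * 100, 1))  # ~27.9 cm at 10 meters
```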
Pinch-and-Pull Components
-------------------------
The pinch-and-pull handle became possible thanks to the introduction of the pinch as an interaction method. The handle can be pinched, held, and then released to make precise selections. This component feels more satisfying than a virtual push button because releasing an elastic string carries no expectation of physical resistance.

The pinch-and-pull handle was the foundation for a few additional components:
**1D Picker:** These are components that move along one axis, like a slider, a simple menu, or a time scrubber for videos.
**2D Picker:** Pulling the handle out can also allow users to make selections along 2 axes for things like a color picker or a radial selector. The full palette of options is revealed once the user pulls the handle outward, and the handle can be moved left, right, up, and down to select the desired option.
**3D Picker:** The space opened up by pulling the handle can be used to provide further options, giving a user the ability to make selections across 3 axes. We experimented with a volumetric color picker, where the user can move the handle left, right, up and down to select colors, and forward or backward in space to select shade.
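For the 1D picker described above, the pulled handle’s displacement can be projected onto the slider’s axis and normalized; this is a sketch of the single-axis constraint mentioned in our Principles section, with hypothetical names and an assumed travel length.

```python
import numpy as np

def slider_value(handle_pos: np.ndarray,
                 slider_origin: np.ndarray,
                 slider_axis: np.ndarray,
                 travel_length: float) -> float:
    """Normalized 0..1 value for a pinch-and-pull 1D picker.

    Motion off the slider's axis is ignored, which is what limits the
    interaction to a single axis and makes the selection more precise."""
    axis = slider_axis / np.linalg.norm(slider_axis)
    displacement = np.dot(handle_pos - slider_origin, axis)
    return float(np.clip(displacement / travel_length, 0.0, 1.0))
```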
Pointer
-------
A short, more directional pointer can encourage users to focus on the cursor rather than the raycast, which improves accuracy when raycasting with hands.

The pointer’s squishable shape signifies that you’re meant to pinch to interact with it, and its color, opacity and shape change throughout the open, almost-pinched and pinched states.

Cursor
------
The cursor provides continuous feedback on the state of the pinch, which allows the user to focus on what they’re targeting instead of having to shift their gaze toward their hands. The fill, outline and scale of the cursor change throughout the open, almost-pinched and pinched states. The cursor also visually pops when the pinch occurs, and remains in the same state until the pinch is released.
Note: We designed our cursor with a semi-transparent fill to ensure that it doesn’t obscure targets, and a secondary dark outline to make sure it’s visible against bright backgrounds. The cursor inversely scales based on distance to maintain a consistent angular size.
Hands
-----
The hands we designed consist of two elements — a static fill with a dynamic outline.

The dynamic outline changes color to provide continuous feedback between the open, almost-pinched, and pinch states.
**System Pose:** When the palm faces the user to perform the system gesture, the outline is blue. The pinching fingers turn light blue as they start to pinch.
**Pointing Pose:** When the hand is facing outwards and targeting an object, the outline is light grey. The pinching fingers turn white as they start to pinch.
Note: For the fill, we used a translucent dark grey with a fresnel-based opacity, which gives the hand presence in bright environments without obscuring what’s behind it. The fingers fully occlude each other when overlapping (or else you’d be staring down the inside of your fingers). Both the fill and outline fade out at the wrist, since the wrist’s angle isn’t tracked and regularly breaks immersion and presence.
Insight SDK Tips and Tricks
===========================
Passthrough allows developers to build mixed reality applications that blend a user’s physical world with virtual elements. This document provides tips and tricks for using Passthrough to create immersive mixed reality experiences that seamlessly blend virtual content with the user’s real world, in a way that is safe and engaging, meets users’ expectations, and helps protect them as they interact with both physical and virtual worlds.
Health and Safety Tips
----------------------
The following guidelines are a non-exhaustive list of important design considerations for Passthrough-enabled applications. To ensure the health and safety of users, some of the most important considerations are the user’s playspace environment, the duration of their direct exposure to Passthrough, and their perception of occlusions with virtual content.
### Playspace
While Passthrough provides the ability for users to ‘see’ their physical environment, it is not a substitute for the safety mechanisms provided by the Guardian boundary.
- Passthrough enabled apps should only be used within the user’s Guardian boundary.
- Users will have cleared their playspace while creating their Guardian boundary and should not be asked to bring physical objects into their cleared space.
- Avoid placing content that the user will closely interact with near the Guardian boundary to minimize chances of users either inadvertently hitting objects outside the boundary or being encouraged to cross the boundary.
### Duration of Exposure
For the purposes of these guidelines, we differentiate between content the user will perceive as ‘real’ and content the user will perceive as ‘virtual’. Virtual content in this context consists of any virtually rendered asset that is not provided by Passthrough.
- It is important not to leave users in a ‘Passthrough only’ environment for extended periods of time without virtual content to focus on, so avoid experiences with long stretches of Passthrough-only exposure before virtual content is provided.
- Hours-long exposure to Passthrough may result in visual discomfort or visually induced motion sickness.
- Suggest that users take a break if they feel discomfort, and that they not start again until the discomfort has passed.
- Potential confusion of spatial relationships between physical and VR objects may result in visual discomfort, disorientation, or temporary, negative after-effects over time.
### Occlusions with Virtual Content
- In the real world, an object sitting in front of another blocks it from view. While Passthrough gives the impression that the user can see the physical world as it is, this distance and occlusion relationship breaks down: as shown below, virtual content can be placed farther away than an obstacle while still blocking that obstacle from view. Users will not intuitively understand how the images of the physical world and the virtual content are layered; the space between themselves and the virtual content can be misperceived as empty.

The left-hand image above shows how the panel occludes the lamp shade and appears at a distance near the lamp post, shown as a red line in the right-hand image.

In the image above, the virtual screen can appear both in front of or behind the lamp due to occlusion and depth (mis)perception.
With Passthrough, users’ perception that they can see their environment in addition to the virtual content within it can cause them to believe they are more aware of their surroundings than they actually are. In the scenario above, a virtual panel is placed in the user’s view (left). The panel is rendered at a distance near their lamp, which they chose not to exclude from their Guardian boundary because they believed that with Passthrough they would be able to see and avoid it (right).
- As shown above, the opaque virtual window occludes the lamp shade, causing the user to believe the lamp shade is behind their virtual panel. As they approach the panel and lamp, the misperception of depth leads them to believe they will not walk into the lamp until the moment they collide with it.
- To avoid scenarios such as this, consider how much of a user’s field of view a virtual asset will occupy at any given time. Even small objects viewed at close distances can occupy a large portion of the field of view, and it may be necessary for virtual objects to maintain a consistent distance from the user so that they do not obscure real-world objects in the user’s physical environment as the user moves about it.
Design Tips and Tricks
----------------------
The tips and tricks below offer non-exhaustive examples of ways you can use Passthrough to build immersive MR experiences, increase users’ real-world awareness in MR, and decrease onboarding friction for users new to VR. Think of Passthrough as a way to show (or not show) the user their full or partial physical environment from within virtual reality, or as a way to overlay a virtual world on top of their physical reality.
### Passthrough as a Background
Apps can utilize Passthrough as a background while rendering virtual objects. This can be used to increase real world awareness, reduce onboarding friction for users new to VR or to build AR-like experiences.

#### Increase Real World Awareness
Passthrough can enable users to see the real world, navigate it, and interact with real and virtual objects. This can help you build apps that improve users’ real-world awareness while they enjoy virtual experiences. Consider the productivity application below, which provides virtual office tools at the user’s actual workspace.

#### Reduce VR Onboarding Friction
Consider the user journey through this spectrum of experiences. An application can start in Passthrough, then take the user into a fully immersive, virtual world before returning to their own living room. This can help increase adoption of VR applications for users new to the experience.

### Passthrough Windows
Selective Passthrough in Unity and Native allows developers to create windows into the real world from VR. This can help you to show only partial windows into the user’s physical environment and also help frame transitions between physical and virtual realities.
Tips:
- When presenting users with multiple realities, a good rule of thumb is to separate them and clearly mark the transition.
- The exception to this rule is when a 1:1 match exists between virtual and physical spaces. In these cases, the effect can be magical.
The following image showcases the framing of transition between realities. Generally speaking, overlapping two distinct realities (the physical and virtual worlds) can be confusing for users and is discouraged.

In the following image, a virtual frame is used to separate the user’s physical environment from a virtual place.

The following examples illustrate how the failure to separate and clearly mark the transition from virtual to physical spaces leads to a jarring and disorienting experience, causing users to feel confused and unsafe:

The following images show clear separation along natural borders, or full-screen blends where virtual and real spaces align, revealing delightful experiences.
### Passthrough Mixed-Reality and Colors
On Quest and Quest 2, the camera sensors are black and white. Passthrough can either be shown as-is or stylized to achieve an artistic look.
Tips:
- When using Passthrough on Quest, consider a visual language where grayscale means real and colored means virtual. Colors then become a driver of attention toward gameplay or storytelling elements.
- Stylization effects can be animated, providing means to highlight particular moments to the user, such as transitioning from the real world to a virtual one or reaching an achievement.
In the following image, a portal is about to open in the middle of the room. Color is used to draw the user’s attention.

### Realistically Render Virtual Content with Occlusions and Shadows
Occlusion and shadows allow apps to create convincingly real virtual content in the physical world, creating more immersive Mixed Reality experiences.
Tips:
- Without occlusion, VR objects just appear as floating inside Passthrough. Apply proper occlusion by masking parts of your virtual object that ought to appear behind Passthrough objects.
- Place a halo or a drop shadow on the floor to “pin” an object to a particular space in Passthrough.

### Outlines over VR
Despite the previous suggestion, you may desire to blend Passthrough over virtual content. Stylization can help achieve more pleasant and less distracting results.
Tips:
- Outlines are easier to blend over immersive virtual environments, as they avoid creating a bias toward either bright or dark regions of the passthrough image.
- The color of the outlines can be chosen to contrast with the virtual background.
- This effect is primarily achieved with Passthrough set in the Overlay mode.
In the following image, a user is immersed in a virtual environment, but can see their own hands and nearby environment rendered as subtle lines.

### Naive Lighting
The Insight SDK allows developers to control the blending of Passthrough and their app. By smartly playing with the transparency of a black background, applications can create the illusion that the environment is actually being lit and dimmed.
Tips:
- Lighting Passthrough is still an open area for exploration, with no single way to create the illusion. Experiment by blending the gradient mask, grayscale re-lighting, tinting the Passthrough images, adjusting the camera’s background opacity or stylizing Passthrough edges to achieve the desired effect.
The following examples demonstrate naive lighting with Passthrough. In the left-hand image, Passthrough is dimmed everywhere except for a cone of light attached to a flashlight the user holds in their hand. In the right-hand image, Passthrough is dimmed around a board game on a table, drawing the user’s attention to the game.
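Independent of any particular SDK API, the flashlight example can be sketched as a per-pixel brightness mask computed from the angle between each Passthrough pixel’s view ray and the flashlight’s direction; the cone size, falloff, and ambient level below are assumptions.

```python
import numpy as np

def flashlight_dim_mask(ray_dirs: np.ndarray,       # (H, W, 3) unit view directions
                        light_dir: np.ndarray,      # flashlight direction
                        cone_deg: float = 20.0,     # fully lit inside this cone (assumed)
                        falloff_deg: float = 10.0,  # soft edge width (assumed)
                        ambient: float = 0.15       # dim level outside the cone (assumed)
                        ) -> np.ndarray:
    """Per-pixel brightness (0..1) for a naive 'flashlight' over Passthrough."""
    light_dir = light_dir / np.linalg.norm(light_dir)
    cos_angle = np.clip(ray_dirs @ light_dir, -1.0, 1.0)
    angle_deg = np.degrees(np.arccos(cos_angle))
    t = np.clip((cone_deg + falloff_deg - angle_deg) / falloff_deg, 0.0, 1.0)
    return ambient + (1.0 - ambient) * t
```

How this mask is applied (for example, as the opacity of a dark layer over the Passthrough feed) depends on how your app blends Passthrough, so treat it as the math behind the effect rather than a drop-in implementation.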
