# Introduction to Virtual Medical Ontology
This guide will help you get started with development for the Virtual Medical Ontology.
This software provides a convenient way to display and manipulate large data structures in a virtual reality environment. Virtual Medical Ontology reads data from a file and builds a graphical structure from it. This addresses many technical problems imposed by the limited screen real estate of a conventional display. A virtual reality headset is required in order to use the software.
The software is developed around Unity’s XR Interaction Toolkit Package for VR support. Documentation of the package can be found here: [XR Interaction Toolkit](https://docs.unity3d.com/Packages/com.unity.xr.interaction.toolkit@0.9/manual/index.html)
# Getting started with Virtual Medical Ontology
### File Reader/Ontology Loader
Plug in your virtual reality headset and run the software by pressing the Play button at the top of the Unity window. Within the software, open the File Explorer located in the center of the cart by hovering over the file reader option with your hand. Press the trigger on your VR controller to interact with the UI and access the file explorer. Here you can open a .csv file that contains the data of the ontology to be displayed as a graphical tree structure.

##### Figure 2-A File Explorer UI
More information on the Ontology Loader script can be found in the code documentation here: [Ontology Loader Script]()
# XR Rig
Within the XR Rig game object, the XR Rig component acts as the user’s senses in the virtual environment. For this project, the XR Rig that is currently being used is the Room-Scale XR Rig.
More information on the XR Rig can be found here: [XR Rig](https://learn.unity.com/tutorial/configuring-an-xr-rig-with-the-xr-interaction-toolkit#60340fd5edbc2a50f8484099)
The software currently makes use of action-based behaviors, which are made possible by the Input Action Manager component.
For movement, the software makes use of the Continuous Move Provider, which smoothly translates the XR Rig over time as the user shifts the analog stick on the left controller.
More information on the Continuous Move Provider and the Locomotion System can be found here: [Locomotion](https://docs.unity3d.com/Packages/com.unity.xr.interaction.toolkit@0.10/manual/locomotion.html)
The XR Rig includes GameObjects for a set of hand controllers and a camera.
Here is our modified version of the XR Rig prefab, including the [cart](#) and wastebasket:

>Two pairs of [hand](#hands) controllers exist, one for raycast interaction and one for physical hands. Toggle between them by holding down the [A button](#).
## Control Scheme

---
# Cart
The cart is a child of the XR Rig within the scene, which allows it to move along with the user for convenience. When expanded, it includes slots and buttons that provide access to multiple tools and places for holding nodes. The slots and buttons can be seen in the figure below.

##### Cart Slots
- **A - Node Socket 1**: One of three slots to place nodes in.
- **B - Node Socket 2**: One of three slots to place nodes in.
- **C - Node Socket 3**: One of three slots to place nodes in.
- **D - Scope Socket**: Slot on the left end of the cart that holds the Scope tool.
- **E - Scissors Socket**: Slot on the right end of the cart that holds the Scissors tool.
- **F - Glue Socket**: Slot in the center of the cart that holds the Glue tool.
- **G - Scope Return Button**: UI button by the Scope slot that returns the scope to its slot when pressed.
- **H - Glue Return Button**: UI button by the Glue slot that returns the glue to its slot when pressed.
- **I - Scissors Return Button**: UI button by the Scissors slot that returns the scissors to its slot when pressed.
Each slot child object within the cart holds an XR Socket Interactor component used for holding nodes and tools in place. This allows the following actions to be done:
* Grabbing nodes into cart slots
* Moving within the environment with nodes and tools held in their respective slots.
### SLOTS AND SOCKETS:
* The cart can carry XR interactable objects thanks to the XR Socket Interactor component. To use the component, the entity needs both the component itself and a collider that acts as a trigger to check for interactables.
* For the slots on the cart, you will need to set the starting selected interactable to the corresponding entity in the world.

* The slots also have another script called Teleport Object that takes two parameters: slot, the slot that the script is assigned to, and obj, the entity that is placed in that slot when the script is called.

More information on the XR Socket Interactor can be found here: [XR Socket Interactor](https://docs.unity3d.com/Packages/com.unity.xr.interaction.toolkit@0.0/api/UnityEngine.XR.Interaction.Toolkit.XRSocketInteractor.html)
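Putting these pieces together, a minimal slot setup might look like the following sketch. This is illustrative only; in this project the components are configured in the inspector.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustrative setup of a cart slot: a trigger collider to detect
// interactables plus the XR Socket Interactor that holds them.
public class SlotSetup : MonoBehaviour {
    private void Reset() {
        // The collider acts as the trigger that checks for interactables.
        var trigger = gameObject.AddComponent<SphereCollider>();
        trigger.isTrigger = true;
        // The socket holds any grab interactable that enters the trigger.
        gameObject.AddComponent<XRSocketInteractor>();
    }
}
```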
### BUTTONS
* The buttons on the cart call the Teleport Object script and tell it to use the doTeleport() function. doTeleport() takes the rigidbody of the desired entity and sets its position to the position of the slot. The entity then attaches to the slot's socket automatically, because the trigger finds a grab interactable inside it (see the sketch below).

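Based on the description above, here is a minimal sketch of such a teleport helper. The field names are assumptions derived from the parameter descriptions; see the linked Teleport script documentation for the real implementation.

```csharp
using UnityEngine;

// Hypothetical sketch of the Teleport Object behavior described above.
public class TeleportObject : MonoBehaviour {
    [SerializeField] private Transform slot; // The slot this script is assigned to.
    [SerializeField] private Rigidbody obj;  // The entity placed in that slot.

    // Moves the entity's rigidbody to the slot position. The slot's
    // XR Socket Interactor trigger then finds the grab interactable
    // and attaches it to the socket automatically.
    public void doTeleport() {
        obj.position = slot.position;
        obj.velocity = Vector3.zero; // Stop residual motion (assumption).
    }
}
```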
More information on the Teleport script can be found here: [Teleport Script](../api/Global.TeleportObject)
---
# Scope
The Scope tool is a spherical game object used to view nodes and objects that can be hard to see from a distance. The scope contains a UI handle on its side that controls the variable zoom of the scope. The scope can be held with one hand while its zoom distance is controlled by the other hand.

##### Scope Tool

##### Scope > Camera Component
The figure above shows the properties of the Camera component under Scope > Camera.
Unity has a special texture type called a render texture, which lets a camera output what it sees onto an entity textured with it. To set this up, assign the render texture as the Target Texture of the camera in the scene. Then assign the render texture to a material and place that material on an entity to see the camera feed.
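As a sketch, the same wiring could also be done from code. In this project the assignment is done in the inspector; the field names here are illustrative.

```csharp
using UnityEngine;

// Illustrative runtime setup of a render texture for the scope.
public class ScopeCameraSetup : MonoBehaviour {
    [SerializeField] private Camera scopeCamera;          // The Scope > Camera object.
    [SerializeField] private RenderTexture renderTexture;
    [SerializeField] private Renderer lensRenderer;       // Entity that displays the feed.

    private void Start() {
        scopeCamera.targetTexture = renderTexture;         // Camera renders into the texture.
        lensRenderer.material.mainTexture = renderTexture; // Material shows the camera feed.
    }
}
```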

##### Scope>Canvas Component
To use the slider for the zoom in VR, you need a worldspace canvas that the player can interact with. To implement this, go to the canvas and change its render mode to World Space. To allow interaction with the canvas, set the Event Camera to the camera you are using for the player. The canvas also needs a Tracked Device Graphic Raycaster so it can receive input from the XR controller; add that component (from the XR package) to the canvas.
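The equivalent setup in code would look roughly like this sketch; the project performs these steps in the inspector.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit.UI;

// Illustrative configuration of a worldspace canvas for XR input.
public class WorldspaceCanvasSetup : MonoBehaviour {
    [SerializeField] private Canvas canvas;
    [SerializeField] private Camera playerCamera;

    private void Start() {
        canvas.renderMode = RenderMode.WorldSpace;
        canvas.worldCamera = playerCamera; // The 'Event Camera' field in the inspector.
        // Lets the canvas receive raycasts from XR controllers.
        canvas.gameObject.AddComponent<TrackedDeviceGraphicRaycaster>();
    }
}
```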

##### Scope>Slider Component
This feature uses the UI slider to change the field of view of the camera that renders to the render texture. If everything is set up correctly, you only need to bind the slider's On Value Changed event to the field of view of the scope's camera. Moving the slider should then zoom the scope in and out.
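For reference, the handler bound to On Value Changed could be as simple as this sketch (the actual binding in this project is done in the inspector):

```csharp
using UnityEngine;

// Illustrative zoom handler; wire the slider's On Value Changed to SetZoom.
public class ScopeZoom : MonoBehaviour {
    [SerializeField] private Camera scopeCamera; // The camera rendering to the scope texture.

    // A smaller field of view corresponds to a higher magnification.
    public void SetZoom(float fieldOfView) {
        scopeCamera.fieldOfView = fieldOfView;
    }
}
```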
Tutorial reference used to create the scope tool: [Making A Scope Tool Tutorial](https://codemeariver.wordpress.com/2017/06/30/making-a-sniper-scope-in-vr/)

## XR Grab Interactable
The Scope tool - as well as the nodes - has an XR Grab Interactable component attached. The XR Grab Interactable is a component that allows the user to grab the gameObject as one would a physical object. The Grab Interactable allows the user to pick up, drop, throw, or place the object into a socket using either the direct (hand) or ray interactors.

##### XR Grab Interactable Component
### Example Grabbable Object Creation
Assuming the GameObject to be interacted with has been created, all that is left to do is add the XR Grab Interactable component to the object itself. Once this is done, the object can be freely interacted with.
To change the location the object is held by - the 'handle' - create an empty GameObject as a child of the aforementioned object, then position the child at the desired location. Once that is done, drag and drop the child object from the Hierarchy into the Attach Transform property of the parent's Grab Interactable component.
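The same setup can also be sketched in code. The helper below is illustrative and not part of the project.

```csharp
using UnityEngine;
using UnityEngine.XR.Interaction.Toolkit;

// Illustrative helper that makes an existing GameObject grabbable and
// optionally assigns a child 'handle' as the attach transform.
public static class GrabbableFactory {
    public static XRGrabInteractable MakeGrabbable(GameObject target, Transform handle = null) {
        // XRGrabInteractable requires a Rigidbody; Unity adds one automatically.
        var grab = target.AddComponent<XRGrabInteractable>();
        if (handle != null) {
            grab.attachTransform = handle; // Where the object is held.
        }
        return grab;
    }
}
```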
Here is a link to a tutorial on using Interactors that could help with familiarizing with this project: [Interactor Tutorial](https://learn.unity.com/tutorial/using-interactors-and-interactables-with-the-xr-interaction-toolkit)
---
# Scissors
The scissors tool is an interactable object on the cart that can be picked up with the grab function on a VR controller. The scissors are one of two tools whose scripts derive from the Tool class. The scissors allow the user to cut the edges that hold the relation between one node and another.
### Cutting Edges
Removing Edges and the parent/child relation between two nodes can be done using the scissors tool. Instructions for use are as follows:
1. Pick up scissors from the cart with the grab button on VR controllers. (usually the back trigger)
2. Hover over an edge, causing the edge to be highlighted in red.
3. Perform the trigger selection by pressing the trigger to cut the highlighted edge.

##### Scissors Script
The Scissors script is responsible for removing edges and for the cutting animation performed when the cutting action is triggered with the VR controller. As mentioned before, the Scissors script derives from the Tool class/script, as does the glue tool's script. The Tool class in turn derives from XR Grab Interactable, hence the similarities in component properties.
For the animation that is seen when performing the trigger action (opening/closing) the following code is used:
```csharp=
/// Opens the scissors smoothly.
private void OpenScissors() {
    _scissorsA.DOTransformState(_scissorsAOpenState, _cutTime);
    _scissorsB.DOTransformState(_scissorsBOpenState, _cutTime);
}

/// Closes the scissors smoothly.
private void CloseScissors() {
    _scissorsA.DOTransformState(_scissorsAClosedState, _cutTime);
    _scissorsB.DOTransformState(_scissorsBClosedState, _cutTime);
}
```
Since the scissors consist of two blades (one object for each), the main object is separated into two child objects - Scissors A and Scissors B. The rotation transforms of the two parts are then moved into their respective Open and Closed states, mimicking the animation of the scissors opening and closing when the trigger action is performed.
The following preview shows the necessary code that calls the Close and Open states - along with the state the scissors take on initial grab.
```csharp=
/// Called whenever this Tool starts being used while held by a controller. Closes the Scissors and attempts to cut an edge.
protected override void OnUseStart(ActivateEventArgs args) {
    base.OnUseStart(args);
    CloseScissors();
    RemoveEdge();
}

/// Called whenever this Tool stops being used while being held by a controller. Opens the Scissors.
protected override void OnUseEnd(DeactivateEventArgs args) {
    base.OnUseEnd(args);
    OpenScissors();
}

/// Called whenever this Tool is grabbed by a controller. Opens the Scissors.
protected override void OnGrab(SelectEnterEventArgs args) {
    base.OnGrab(args);
    OpenScissors();
}

/// Called whenever this Tool is released by a controller. Closes the Scissors.
protected override void OnRelease(SelectExitEventArgs args) {
    base.OnRelease(args);
    CloseScissors();
}
```
The OnUseStart, OnUseEnd, OnGrab, and OnRelease functions are derived from the parent Tool class. They make use of the Activate, Deactivate, Select Enter, and Select Exit events that the XR Interaction Toolkit makes available, which fire as the respective grab/trigger button is pressed or released.
```csharp=
private void OnTriggerEnter(Collider other) {
    if (!other.CompareTag(_edgeTag)) {
        return;
    }
    var parent = other.transform.parent;
    if (parent != null && parent.TryGetComponent(out EdgeObject edge)) {
        _oldEdgeColors[edge] = edge.Color;
        edge.Color = _hoverColor;
        if (_hoveredEdges.Count > 0) {
            _hoveredEdges[_hoveredEdges.Count - 1].Color = _hoverColor;
        }
        _hoveredEdges.Remove(edge);
        _hoveredEdges.Add(edge);
        _hoveredEdges[_hoveredEdges.Count - 1].Color = _lastHoverColor;
    }
}

private void OnTriggerExit(Collider other) {
    if (!other.CompareTag(_edgeTag)) {
        return;
    }
    var parent = other.transform.parent;
    if (parent != null && parent.TryGetComponent(out EdgeObject edge)) {
        if (_oldEdgeColors.TryGetValue(edge, out var color)) {
            edge.Color = color;
            _oldEdgeColors.Remove(edge);
        }
        _hoveredEdges.Remove(edge);
        if (_hoveredEdges.Count > 0) {
            _hoveredEdges[_hoveredEdges.Count - 1].Color = _lastHoverColor;
        }
    }
}
```
The code above highlights edges when the scissors enter the collider of an edge. When the scissors exit, the edge turns back to its former color. The highlighted edges are placed into a _hoveredEdges list that is later used when removing edges.
```csharp=
/// Removes the most recently hovered edge within range of the scissors (if one exists).
private void RemoveEdge() {
    if (_hoveredEdges.Count == 0) {
        return;
    }
    var lastEdge = _hoveredEdges[_hoveredEdges.Count - 1];
    lastEdge.End.RemoveEdge(lastEdge);
    _hoveredEdges.RemoveAt(_hoveredEdges.Count - 1);
    if (_hoveredEdges.Count > 0) {
        _hoveredEdges[_hoveredEdges.Count - 1].Color = _lastHoverColor;
    }
}
```
The code above performs the removal of edges: it takes the _hoveredEdges list and removes the last edge that was highlighted. This eliminates issues caused by stacking colliders when multiple edges are highlighted at once.
For code documentation for the scissors tool visit here: [Scissors Doc](../api/Global.Scissors)
---
# Glue
The glue tool is an interactable object on the cart that can be picked up with the grab function on a VR controller. The glue is the second of the two tools whose scripts derive from the Tool class. The glue allows the user to create edges that link two nodes together, creating a parent-child relation between them.
### Attaching Nodes
Creating relations between two nodes and producing edges between the two can be done using the glue tool. Instructions for use are as follows:
1. Pick up glue tool with the grab button on VR controllers.
2. Hover tool over a node, causing the node to be highlighted in white.
3. Perform the trigger selection by pressing and holding the trigger to create the parent end of the edge.
4. While still holding the trigger, move to the desired child node of the relation and release the trigger button while hovering over the second node (desired child node).
5. If performed correctly, the two nodes should have an edge connecting one another.

##### Glue Script
The Glue script acts similarly to the Scissors script; however, it is responsible for creating edges between two nodes when the gluing action is performed with the VR controller triggers. Like the Scissors script, the Glue script derives from the Tool class/script, which in turn derives from XR Grab Interactable, hence the similarities in component properties.
```csharp=
/// Called whenever this Tool starts being used while held by a controller. Attempts to start creating an edge.
protected override void OnUseStart(ActivateEventArgs args) {
    base.OnUseStart(args);
    if (_hoveredNodes.Count == 0) {
        return;
    }
    _hasSelectedParent = true;
    _parentNode = _hoveredNodes[_hoveredNodes.Count - 1];
    if (_currentArrow == null && _arrowPrefab != null) {
        _currentArrow = Instantiate(_arrowPrefab).transform;
        UpdateArrow();
    }
}
```
The code snippet above creates a temporary arrow object that moves along with the controller position. The UpdateArrow function is then called, which keeps the rotation and position of the placeholder arrow object up to date. The code for the UpdateArrow function is as follows:
```csharp=
/// Updates the position, rotation, and scale of the intermediate edge.
private void UpdateArrow() {
    if (_currentArrow == null || _parentNode == null) {
        return;
    }
    _currentArrow.position = (transform.position + _parentNode.transform.position) / 2f;
    _currentArrow.localScale = new Vector3(
        _currentArrow.localScale.x,
        _currentArrow.localScale.y,
        (transform.position - _parentNode.transform.position).magnitude / 2f
    );
    _currentArrow.LookAt(transform.position);
}
```
The code that runs on trigger release:
```csharp=
/// Called whenever this Tool stops being used while being held by a controller. Attempts to create an edge between nodes.
protected override void OnUseEnd(DeactivateEventArgs args) {
    base.OnUseEnd(args);
    if (_currentArrow != null) {
        Destroy(_currentArrow.gameObject);
        _currentArrow = null;
    }
    if (_hoveredNodes.Count > 0) {
        _childNode = _hoveredNodes[_hoveredNodes.Count - 1];
    }
    GlueEdge();
    Color color;
    if (_parentNode != null && _oldNodeColors.TryGetValue(_parentNode, out color)) {
        _parentNode.Color = color;
        _oldNodeColors.Remove(_parentNode);
    }
    if (_childNode != null && _oldNodeColors.TryGetValue(_childNode, out color)) {
        _childNode.Color = color;
        _oldNodeColors.Remove(_childNode);
    }
    _parentNode = null;
    _childNode = null;
}
```
The first part of the code checks whether the trigger is being released outside of a node's collider, as if to cancel the selection. If the user is hovering over a new node, it is taken from the _hoveredNodes list and GlueEdge() is called, which attempts to create the actual edge between the highlighted nodes. The rest of the code restores the original colors of the nodes that were highlighted, and finally resets _parentNode and _childNode.
```csharp=
/// Attempts to create an edge between the two selected nodes (if they exist).
private void GlueEdge() {
    if (!_parentNode || !_childNode) {
        return;
    }
    if (_parentNode == _childNode) {
        return;
    }
    if (_parentNode.GetEdge(_childNode) == null) {
        var newEdge = TreeObject.Instance.AddChild(_parentNode, _childNode);
        newEdge.ConnectToNodes();
    }
}
```
The code above creates the actual connection between the parent and child nodes. Before this point, only a placeholder arrow existed as a visual indicator while the user was creating an edge. The GlueEdge function creates a real Edge using the AddChild function in the TreeObject class, which adds an edge between the specified nodes to the tree. The ConnectToNodes function is then called to control the transform of the Edge, attaching it to the nodes and anchoring it to their movement.
More information for AddChild can be found here: [TreeObject Doc](../api/GraphTheory.TreeObject)
More information for ConnectToNodes can be found here: [EdgeObject Doc](../api/GraphTheory.EdgeObject)
For more information on the glue tool script visit here: [Glue Doc](../api/Global.Glue)
---
# Nodes
Nodes are game objects found under a NodeContainer within the tree structure. Nodes have information attached to them, including labels, relationships, and descriptions. Accessing a node within the project is done as follows:
1. Hover over node objects using VR controllers.
2. Perform trigger press to open up Node UI Menu
More information on the Node UI Menu can be found here: [`Node UI Menu`](#Node-UI-Menu)

##### Node Object

##### Node Component
Figure 2-E displays the node object within the project. The information of the Node object can be found in the component's properties, which hold data pulled from the .csv file. The properties are described below:
- Color: Acts as an easy way to identify the level of the node, through a gradient of decreasing contrast.
- UniqueName: Displays the unique ID attached to the node.
- Properties: Holds further information about the Node object, such as its definition.
- Parent: Lists the parent that the current node is related to.
- Children: Shows the number of child nodes related to this node.
More information on the Nodes script can be found in the document here: [Node Script]()
## Wastebasket
The wastebasket, a part of the [XR Rig](#), contains all nodes that are not connected (directly or indirectly) to the root node.

# Node UI Menu
The Node UI Menu can be accessed by performing a trigger press with the VR controller on any node object. The actions available within the menu can be seen in the figure below.

##### Node UI Menu
The action options available within the Node UI Menu are as follows:
- View Properties: Changes the UI menu to show all the information that the current node holds. Below is an image of what the View Properties action looks like.
##### View Properties UI Menu
- Move Node: Place the current node into an open cart slot.
- Cut Node: Removes all edges and relations to this current node.
- Expand: Expands the collapsed bundle, revealing all the children nodes related to the current node.
- Collapse: Collapses all children related to the current node into one bundle.
More information on the Node UI Menu script can be found in the document here: [Node UI Menu Script]()
# Edges
Edges are responsible for displaying the connection between two nodes. Edges are displayed as blue lines starting at parent nodes and ending at their children. Below is an image of the components found in an edge.

##### Edge component
More information on the edge object can be found within the Edge Object script. The main properties you should be aware of and their explanation are as follows:
- Start: The starting node that the edge object is connected to (parent).
- End: The end node which the edge object is pointing towards (child).
More in depth information on the Edge Object script can be found in the document here: [Edge Object Script]()
# Graph Structure
The graph structure is a graphical representation of the ontology within the scene, providing a simple viewing experience for the user. The graph structure takes the form of a horizontal tree spreading nodes out from the root node. Below is an image of the graph structure.

##### Figure 2-K Tree Structure in a Radial Layout
## Creating a Graph
This project separates the graph structure model from the visual aspect. To create a new graph structure, use the class `GraphTheory.Graph`. If the graph should be displayed visually with visible nodes and edges, then use `GraphTheory.GraphObject`, which wraps the graph model internally. Additionally, any class ending with `Object` in the `GraphTheory` namespace handles the visual aspect and the internal structure.
### Creating a Graph
To create a `Graph`:
```csharp
GraphTheory.Graph graph = new GraphTheory.Graph();
```
### Creating a GraphObject
To create a `GraphObject`, just drag the script onto a GameObject in the Inspector. The internal graph structure is created automatically.
The `GraphObject` can then be referenced in code with:
```csharp
GraphTheory.GraphObject graphObj = GetComponent<GraphTheory.GraphObject>();
```
## Creating a Tree
The `Tree` class is a subset of `Graph` and is used specifically for representing tree-like structures. It functions similarly to the graph structures described above. Do not use `Tree` to represent any non-tree structures.
## Creating Nodes
Creating nodes for a graph can be done in the following way:
```csharp
// Create a visual node (also creates a node in the internal structure)
NodeObject nodeObj = graphObj.CreateNode("unique name");
// Create a node for non-visual use
Node node = graph.CreateNode("my node");
```
## Connecting Nodes
To connect nodes in a graph with a new edge:
```csharp
Edge edge = startNode.AddEdge(endNode, 4f, false);
// or for visual connection:
EdgeObject edgeObj = startNodeObj.AddEdge(endNodeObj, 4f, false);
```
Moving an existing edge to point to a different node:
```csharp
startNode.AddEdge(existingEdge, endNode);
```
Deleting a connection:
```csharp
startNode.RemoveEdges(endNode);
```
## Deleting Nodes
To delete a node:
```csharp
graph.Remove(node);
```
To delete a node object:
```csharp
graphObj.Remove(nodeObj);
```
You can also safely use Unity's Destroy() to destroy (Node/Edge)Objects instead:
```csharp
Destroy(nodeObj);
```
## Switching Between Structure Model and Visual Representation
If you have a reference to a structure model graph object or visual representation object, and need to access the other for any reason, it can be done like this:
```csharp
// Getting the visual component of a node
NodeObject nodeObj = graph.GetObject(node);
// Getting the structural component of a node
Node node = nodeObj.GetNode();
```
Be aware that any changes done directly to either the visual or structural representation may not make the correct change to the other component.
## Using NodeGroups
Nodes can be placed in groups, where `NodeGroupObjects` can be used to collapse a group of nodes from view while maintaining the internal structure.
Creating a node group and collapsing from view:
```csharp
NodeGroupObject groupObj = graphObj.CreateNodeGroup(nodeObjList);
groupObj.Collapse();
```
## Dummy Nodes/Edges
Dummy nodes/edges are used to display visual nodes/edges without adding them to the internal structure. These dummy graph objects do not have an internal representation.
Making a dummy node
```csharp
NodeObject nodeObj = graphObj.CreateDummyNode("im a dummy");
```
# Loading an Ontology
Loading ontologies is done with the `Ontology.OntologyLoader` object. Note that the ontology loader is *asynchronous* so that engine code can keep running, avoiding frame freezes.
The `OntologyLoader` will load the file `Assets/Resources/{File Name}` at start by default, which should probably be changed. Loading an ontology file at runtime can be done with `OntologyLoader.LoadFile()`.
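For example, loading a file at runtime might look like this. The exact signature of `LoadFile()` is an assumption here; check the loader's code documentation.

```csharp
// Illustrative only; verify the actual LoadFile() signature in the loader docs.
var loader = FindObjectOfType<Ontology.OntologyLoader>();
loader.LoadFile("path/to/ontology.csv");
```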

##### Figure 2-L Graph Components
## Configuring the Ontology Loader
To configure the loader for a CSV ontology file, set these values in the inspector:
| Inspector Name | Description |
|---|---|
| Name Header | The column header that contains the unique name |
| Parents Header | The column header that contains the unique name of the parent |
| Label Headers | A list of column headers that are used for the display label of the node.<br/> The first non-empty column value is picked. |
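For illustration, a CSV like the following (headers and values are hypothetical) would be configured with Name Header = `ID`, Parents Header = `Parent ID`, and Label Headers = [`Preferred Label`, `Synonym`]. Note the last row, where the empty `Preferred Label` causes `Synonym` to be used as the display label.

```
ID,Parent ID,Preferred Label,Synonym
B001,,Human Body,
H001,B001,Heart,
V001,H001,,Left Ventricle
```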
# Keyboard/Searching
The keyboard object consists of multiple child objects representing the keys on the keyboard. To access the keyboard, the user must first open an action that involves a textbox (Node Menu UI/Search). The keys on the keyboard contain labels that display the different letters and key functions.

##### Figure 2-Q Keyboard

##### Figure 2-R Keyboard Script
The figure above shows the keyboard key script attached to each key of the keyboard object. The properties determine the letter displayed on the label and allow the key to be pressed.
More in depth information on the Keyboard/Key script can be found in the document here: [Keyboard/Key Script]()
# Hands
#### 4 Different Controllers for Hands
There are 4 different hand controllers, two of which the user can switch between freely: Hand and Raycast. To switch between the two, the user must click the A button on the right device controller.
* **Hand Controller Raycast**: A hand that uses the XR ray interactor and action controller to allow for manipulation of interactable objects.
* **Hand Controller for Typing**: A version of the raycast hand that is invisible and shortened to the length of the pointer finger, for easier UI interaction when using the non-raycast hands.
* **Hand Controller Uses Hand**: An action based controller that uses the direct interactor to interact with the interactable objects.
* **Hand Controller (Device-based)**: This hand is a part of the Hand Controller Uses Hand controller and is used for the sole purpose of animating the Custom hands from the Oculus VR package with the script Hand Anim. Set the interactor’s Select Action Trigger to State to have this fully working with the animation script.

##### Figure 2-R Hand Anim Script
This component is put on the Oculus VR package’s hand model and is then set up by using the animator from that same package and the Device-based controller that was mentioned in the previous section. The code uses CommonUsages as the parameter for changing the animation for the hand model.

##### Figure 2-S Button Watcher Script
The button watchers check whether a button is being pressed on the controller and, if so, invoke the actions set in the list. The watchers also use the device-based controller to check the state of the buttons. The Timer and Time Left parameters set the amount of time a button must be held to activate the actions in the list. The right hand deals with toggling between the use-hand and raycast controllers. The left hand uses a teleport script attached to the XR Rig that allows the user to move up and down based on which button they press.
>Primary Button: Oculus = A/X
>Secondary Button: Oculus = B/Y
# Gestures
The current implementation of gestures is not very robust or extensible and was meant mainly as a proof of concept. Only two gestures are implemented: an expand gesture and a collapse gesture. Each follows the same concept: when both triggers are pulled, the midpoint of the two hands is found. If this midpoint is within a certain range of a Node or NodeGroup, then that object is selected. Then, the distance between the two hands is checked to ensure that they are far enough apart (in the case of the collapse gesture) or close enough together (in the case of the expand gesture). If the user's hand placement is valid, then the corresponding gesture is started and the selected object is highlighted.
Whenever the triggers are released, the midpoint of the hands is again calculated, and the distance to the selected object is checked; if the midpoint is too far from the selected object, then the gesture is canceled. Otherwise, the distance between the hands is checked to ensure that they are close enough together (in the case of the collapse gesture) or far enough apart (in the case of the expand gesture). If the user's final hand placement is valid, then the active gesture is completed, the action is performed, and the object's highlight is removed.
Whenever a gesture is in-progress, the midpoint of the hands is calculated, and the distance to the selected object is checked. If the midpoint is too far from the selected object, the highlight is removed, otherwise, the object remains highlighted. This is to provide visual feedback for the current status of the gesture.
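As an illustration of the checks described above, the start condition for a collapse gesture could be sketched as follows. The names and thresholds are assumptions, not the actual `GestureManager` code.

```csharp
using UnityEngine;

// Illustrative sketch of the gesture start check described above.
public static class GestureChecks {
    public static bool CanStartCollapse(Transform leftHand, Transform rightHand,
                                        Vector3 targetPosition,
                                        float selectRange, float minHandDistance) {
        // The midpoint of the two hands must be within range of the target object.
        Vector3 midpoint = (leftHand.position + rightHand.position) / 2f;
        if (Vector3.Distance(midpoint, targetPosition) > selectRange) {
            return false;
        }
        // For a collapse gesture, the hands must start far enough apart.
        return Vector3.Distance(leftHand.position, rightHand.position) >= minHandDistance;
    }
}
```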
Below are the properties that can be changed in the inspector:

In the future, it might be beneficial to create a `Gesture` class that allows for the creation of more gestures. The `GestureManager` could then act to check to see if any gesture is started, and if so, call some `Gesture.StartGesture()` method for the corresponding gesture. If the gesture is completed or canceled, `Gesture.CompleteGesture()` or `Gesture.CancelGesture()` could be called.
# UI
## UI Manager
The UI Manager exists within the scene and manages instances (and placement) of the Node Menu, File Explorer, Search Menu, Progress Bars, Trash Menu, and Keyboard.
Below is the view of the `UiManager` component within the inspector:

The `NodeMenuPrefab`, `FileExplorerPrefab`, `SearchMenuPrefab`, `TrashMenuPrefab`, and `PhysicalKeyboardPrefab` fields all store references to prefabs of their respective UI elements. Whenever UI elements are accessed from other classes (e.g. `UiManager.NodeMenuInstance`), the UI Manager checks to see if an instance has already been created; if not, the prefab is instantiated, placed under the correct anchor, and a reference to this instance is stored for later use (i.e. a singleton pattern).
Each of these prefabs (and the Progress Bars, which are handled slightly differently) has a corresponding `UiContainer` in the inspector. The purpose of the `UiContainer` is to assign a Canvas object that the UI elements will be parented to, as well as define the 'anchors' for this Canvas that are dependent on the current device. The `PcUiAnchorSettings` allows you to select an object that the Canvas will be parented to, as well as a relative position, rotation, and scale. If the anchor object is left null, then these values are assumed to be in world space. For PC UI, it is often best practice to use the main camera as the anchor, that way the UI is always displayed on the screen. The `XrUiAnchorSettings` allows you to select the anchor and relative properties for when the user is on an XR device. It is often best practice to select an object that moves with the player but does not rotate with the user's head (such as the `XR Rig` or `Camera Offset` objects), that way the UI appears to have physical presence in the world.
The `NodeLayers` and `UiLayers` fields simply allow the user to define the layers that the Nodes/NodeGroups and UI exist on, that way raycasting (when trying to select a Node/NodeGroup) can be performed more efficiently and be blocked by UI.
The `LeftHandTransform` and `RightHandTransform` fields store references to the left and right hand controllers that are used to send rays whenever the corresponding trigger is pressed. If the ray intersects a Node or NodeGroup, `UiManager.ShowNodeMenuForNode(intersectedNode)` or `UiManager.ShowNodeMenuForNodeGroup(intersectedNodeGroup)` is called to make the Node Menu appear and allow the user to view/modify the selected object.
The `OntologyLoader` and `InteractionManager` fields store references to the instances of the two managers within the scene. The `OntologyLoader` reference is used when loading new files within the File Explorer, while the `InteractionManager` reference is stored in the case that information about any of the XR interactors or interactables needs to be obtained (e.g. within `DoTeleport.cs`, `UiManager.InteractionManager.ForceSelect()` is called).
## UI Elements
There are a few abstract classes that can be derived from to simplify the creation of UI menus. An example menu and list menu will be provided at the end.
### MenuPanel
The `MenuPanel` class defines the structure of a menu that can be toggled on and off. It adds support for simple animation when opening/closing the menu. To change where the menu moves when appearing or disappearing, change the `HiddenState` and `VisibleState` fields. To change the default duration of the animation for a menu, modify the `TransitionLength` field. It is possible to show or hide a menu with a custom transition length by calling `MenuPanel.Show(float transitionLength)`, `MenuPanel.Hide(float transitionLength)`, or `MenuPanel.ToggleVisibility(float transitionLength)`. If the transition length is set to 0, then the menu will open and close instantly.
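For example:

```csharp
// Show, hide, and toggle a MenuPanel with custom transition lengths.
menuPanel.Show(0.5f);              // Animate open over half a second.
menuPanel.Hide(0f);                // Hide instantly.
menuPanel.ToggleVisibility(0.25f); // Toggle with a quarter-second transition.
```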
More advanced features of the MenuPanel include callbacks for when the menu completes its show/hide transitions and when the show/hide transitions are canceled (see `MenuPanel.OnShowComplete`, `MenuPanel.OnShowKill`, `MenuPanel.OnHideComplete` and `MenuPanel.OnHideKill`). Additionally, the interpolation method for the transitions can also be modified by changing the `EaseMethod` field.
### UiListMenu
The `UiListMenu` class is a subclass of the `MenuPanel` class, and thus has the same features as listed above. The purpose of the `UiListMenu` is to more easily create custom lists with custom data (e.g. a search results list or Node container list). To inherit the `UiListMenu`, you must supply classes that correspond to the `UiListElement` (which stores the data and manages an element in the list) and `UiListElementArgs` (which contains the data required to initialize an element).
To use the `UiListMenu`, you must supply a parent `Transform` that the elements will be created under as well as a prefab of the element that you will be generating. Then, to create a new element, you can simply call:
```csharp=
ExampleListItemArgs initArgs = new ExampleListItemArgs {
    ...
};
ExampleListMenu.CreateElement(initArgs);
```
### UiListElement
The `UiListElement` class describes the structure of a list element object. To derive `UiListElement`, you must supply a `UiListElementArgs` subclass as the class that contains the data required to initialize an element. Then, the only method that is required to be defined is `Init(TArgs args)`, which will initialize an instance of the element with the arguments provided.
### UiListElementArgs
The `UiListElementArgs` class is an empty class that should be inherited and given data members.
### Example Menu Creation
As an example, we will create a menu that appears and disappears when a button is pressed and contains a button that prints the current date and time to the console.
First, create a new file called `ExampleMenu.cs` and use the following code:
```csharp=
using UiManagement;
using UnityEngine;
using UnityEngine.InputSystem;

namespace Example {
    public class ExampleMenu : MenuPanel {
        [SerializeField]
        private InputAction _menuToggleAction = null;

        private void Start() {
            // Actually enable the InputAction so that it will listen for input.
            _menuToggleAction?.Enable();
        }

        private void OnEnable() {
            // Ensure that no errors occur if _menuToggleAction is not assigned
            if (_menuToggleAction == null) {
                return;
            }
            // Add the ToggleMenu listener.
            _menuToggleAction.performed += ToggleMenu;
        }

        private void OnDisable() {
            // Ensure that no errors occur if _menuToggleAction is not assigned
            if (_menuToggleAction == null) {
                return;
            }
            // Remove the ToggleMenu listener.
            _menuToggleAction.performed -= ToggleMenu;
        }

        // Just toggle the visibility of this menu using MenuPanel.ToggleVisibility()
        private void ToggleMenu(InputAction.CallbackContext context) {
            ToggleVisibility();
        }

        // This will be referenced in the Button's OnClick callback in the inspector.
        public void PrintTime() {
            Debug.Log(System.DateTime.Now);
        }
    }
}
```
Then, in your scene, create a Canvas object and create a Panel object (using `GameObject/UI/Panel`) parented to the original Canvas object. Set the Canvas' render mode to `World Space` and assign the main camera as the `Event Camera`. Change the Canvas' position and scale so that it is easily visible by the camera. Your setup should be similar to this:

Create a Button object (using `GameObject/UI/Button - TextMeshPro`) and parent it to the Panel object. Move and scale it so that it is easily visible and clickable. Select the Panel object and add the `ExampleMenu` component that we just created. For the `Hidden State`, we can set the scale to zero so that it looks like the menu pops in. Next, click the `+` button on the right side of the `Menu Toggle Action` and add a binding.
For this example, we'll simply set the binding to the spacebar. To do so, double click the newly-created binding and for `Path`, click `Keyboard/By Location of Key (Using US Layout)/Space`. Now, whenever the space bar is pressed, this `InputAction` is fired, which is bound to the toggling of our new menu. It may be useful to first research Unity's new action-based input system if this is difficult to understand. If done correctly, your scene/menu should look similar to this:

Now we need to make the button functional. To do so, select the Button object, and within the `Button` component, create a new `On Click ()` listener. Drag the Panel object to the `Object` field, and for `Function`, select `ExampleMenu/PrintTime()`.

Lastly, we need to make sure that this button is clickable. This step is important: select the EventSystem object and delete the `StandaloneInputModule` and/or `InputSystemUIInputModule` component(s). Then, add the `XRUIInputModule` component. If this menu will be tested on PC only, then change the `Target Eye` field of the main camera to `None (Main Display)`. Otherwise, ensure that the `Target Eye` field is set to `Both`, and add a `TrackedDeviceGraphicRaycaster` component to the Canvas object.
Now, whenever you click play, the menu should not be visible. By pressing the spacebar, you can make the menu appear/disappear, and clicking the UI button should print the time to the console.
### Example List Menu Creation
Here is an example of a simple List Menu that adds a new item containing the current time each time a button is pressed.
Create two new files called `ExampleListMenu.cs` and `ExampleListItem.cs`, and use the following code:
`ExampleListMenu.cs`:
```csharp=
using System;
using System.Globalization;
using UiManagement;
using UnityEngine;
using UnityEngine.InputSystem;

namespace Example {
    public class ExampleListMenu : UiListMenu<ExampleListItem, ExampleListItemArgs> {
        [SerializeField]
        private InputAction _menuToggleAction = null;

        [SerializeField]
        private InputAction _addAction = null;

        private void Start() {
            // Actually enable the InputActions so that they will listen for input.
            _menuToggleAction?.Enable();
            _addAction?.Enable();
        }

        private void OnEnable() {
            // Ensure that no errors occur if _menuToggleAction is not assigned
            if (_menuToggleAction != null) {
                // Add the ToggleMenu listener.
                _menuToggleAction.performed += ToggleMenu;
            }
            // Ensure that no errors occur if _addAction is not assigned
            if (_addAction != null) {
                // Add the AddItem listener.
                _addAction.performed += AddItem;
            }
        }

        private void OnDisable() {
            // Ensure that no errors occur if _menuToggleAction is not assigned
            if (_menuToggleAction != null) {
                // Remove the ToggleMenu listener.
                _menuToggleAction.performed -= ToggleMenu;
            }
            // Ensure that no errors occur if _addAction is not assigned
            if (_addAction != null) {
                // Remove the AddItem listener.
                _addAction.performed -= AddItem;
            }
        }

        // Just toggle the visibility of this menu using MenuPanel.ToggleVisibility()
        private void ToggleMenu(InputAction.CallbackContext context) {
            ToggleVisibility();
        }

        // Create a new item in this list that displays the time that this button was pressed.
        private void AddItem(InputAction.CallbackContext context) {
            // Obtain the current date and time as a string.
            var currentTime = DateTime.Now.ToString("G", DateTimeFormatInfo.InvariantInfo);
            // Create a new ExampleListItemArgs instance containing the current date/time string.
            var initArgs = new ExampleListItemArgs {
                Text = currentTime
            };
            // Actually create a list element with the given parameters.
            CreateElement(initArgs);
        }
    }
}
```
`ExampleListItem.cs`:
```csharp=
using TMPro;
using UiManagement;
using UnityEngine;

namespace Example {
    public class ExampleListItem : UiListElement<ExampleListItemArgs> {
        [SerializeField]
        private TextMeshProUGUI _textField = null;

        // This method is called to initialize this element.
        public override void Init(ExampleListItemArgs args) {
            // We are simply setting the value of the text field to a certain value
            _textField?.SetText(args.Text);
        }
    }

    // This class defines the necessary data required to create an ExampleListItem.
    // In this case, we are simply setting some text.
    public class ExampleListItemArgs : UiListElementArgs {
        public string Text;
    }
}
```
Go through the same steps as before to create and configure the menu, Canvas, and EventSystem (except adding the `ExampleListMenu` component to the Panel instead and not creating a Button object). Now, create a ScrollView object (using `GameObject/UI/Scroll View`) parented to the Panel object and ensure that it is visible. Your Panel and ScrollView should look similar to this:

Now we must create a `ExampleListItem` prefab.
Select the `Scroll View/Viewport/Content` object. Add the `VerticalLayoutGroup` component and set the spacing to some nonzero value, such as 10. Select both of the `Control Child Size` boxes, deselect both `Use Child Scale` boxes, and select `Child Force Expand Width` (and not height). Add the `ContentSizeFitter` component and set the `HorizontalFit` field to `Unconstrained` and the `VerticalFit` field to `Preferred Size`.
Create a Panel object parented to the Content object (which will be referred to as 'Element'). Select the Element object and add the `LayoutElement` component. Enable `Min Height` and `Preferred Height`, and set their values to some height value, such as 100. Then, set the height of the Element itself to the same value. Create a Text object parented to the Element object (using `GameObject/UI/Text - TextMeshPro`) and move/scale it as desired. Select the Element object and add the `ExampleListItem` component. Set the `TextField` field to the newly-created Text object. The Element object should look similar to this:

Click and drag the Element object from the Hierarchy to a folder within the Project window to create a prefab of this Element object. Once the prefab has been created, delete the instance within the scene (the child of the Content object).
Finally, we must make the final configuration to the `ExampleListMenu` component. Select the Menu/Panel object and assign the newly-created Element prefab to the `ElementPrefab` field. Then, set the `ElementParent` field to the Content object within the scene. Lastly, create a new binding for the `AddAction` field (use a key other than the spacebar).
Now, whenever you click play, the menu should not be visible. By pressing the spacebar, you can make the menu appear/disappear, and pressing the button corresponding to `AddAction` will add a new element to the list.
## Node Menu
The `NodeMenu` class manages showing information about a bound NodeObject or NodeGroupObject. Because it inherits the `MenuPanel` class, it shares many of the settings, such as the `HiddenState`/`VisibleState`, `TransitionLength`, etc.
In addition, an instance of the Node Menu contains several other properties. The `NameField` property stores a reference to a `TextMeshProUGUI` object that will display the name of the Node or NodeGroup. The `NodeMenuPanel` stores a reference to the main menu panel (which contains options to expand, collapse, view, etc. a Node). The `PropertiesPanel` field stores a reference to the properties list that is automatically updated for the selected Node. (Side note: this list does not use the `UiListMenu` class, though this is simply due to a lack of time). The `NodePropertyPanelPrefab` stores a reference to a prefab that manages a single Node property. The `PropertiesContent` field stores a reference to the desired parent of these elements. The `DisplayInternalProperties` toggle determines whether or not the internal (uneditable) properties of a Node should be shown in the list.
The `InteractableElements` list defines a list of buttons/interactable objects that change interactivity based on whether a Node, NodeGroup, or trashed Node is selected. Each element allows you to choose an interactable UI element and set the conditions under which the interactable will be enabled.
The `ClosedState` and `OpenState` define the size and position of the menu, which is used whenever the `View Properties` button is pressed; whenever the Properties Menu is shown, the size of the Node Menu increases (i.e. opens) so that more properties are visible.
Finally, the `SelectedOutlineState` field allows you to change what the outline around the selected object will look like (or disable it entirely).
To show the Node Menu, use:
```csharp=
// For a NodeObject
UiManager.ShowNodeMenuForNode(nodeObject);
// For a NodeGroupObject
UiManager.ShowNodeMenuForNodeGroup(nodeGroupObject);
```
## File Explorer
The `FileExplorerMenu` class derives the `UiListMenu` and thus contains the same properties as the `MenuPanel` and `UiListMenu`. However, there are a few extra fields.
The `FileExplorerScrollView` field stores a reference to the `ScrollView` component within the menu, which is used to set the scroll amount whenever moving directories. Similarly, the `FileExplorerItemToggleGroup` stores a reference to the `ToggleGroup` component within the menu, which is used to determine which file or folder is currently selected (since Toggle Groups only allow a single Toggle to be selected).
The `FolderIcon` and `FolderIconTint` fields allow the user to specify a certain image and tint to be used to designate a folder. Similarly, the `FileIcon` and `FileIconTint` fields allow the user to specify the image and tint to be used to designate a file.
Lastly, the `CurrentPathLabel` field stores a reference to a text object that is updated with the current directory.
To show the File Explorer Menu from code, you can simply use:
```csharp=
// To show the File Explorer for a specific path:
var path = "C:/some/random/path";
UiManager.ShowFileExplorer(path);
// To show the File Explorer for the last opened path:
UiManager.ShowFileExplorer();
```
There is also an equivalent instance method that can be added to callbacks within the inspector called `UiManager.ShowFileExplorerUI()`.
## Search Menu
The `SearchMenu` class derives the `UiListMenu` class and thus contains the same properties as the `MenuPanel` and `UiListMenu`.
It also contains a single other field; the `SearchField` field stores a reference to the `KeyboardSelectableInputField` object where the search terms will be written to. The contents of this field are what is used to search using the `PhysicalSearchForNodes.Search()` method.
(Please note: this would eventually allow for customizable outlines, as in the `GestureManager` and `NodeMenu` classes, but due to time constraints, this was unfortunately left unfinished.)
## Trash Menu
The `TrashMenu` class is functionally very similar to the `SearchMenu` and `NodeCollectionMenu` classes, except that the `TrashMenu` contains three extra fields: the `TrashLabel` field (which stores a reference to the title text of the menu so that it can be updated with the number of Nodes present in the trash), the `RestoreAnchor` field (which stores a reference to the `Transform` that Nodes will reappear at whenever restored), and the `TrashParent` field (which defines where Nodes are physically stored whenever sent to the trash and disabled).
To send a Node to the trash, you could simply call:
```csharp=
UiManager.AddNodeToTrash(nodeObject);
```
Or, to check if a Node is in the trash, you could check:
```csharp=
bool isInTrash = UiManager.TrashMenuInstance.Contains(nodeObject);
```
To cleanup all invalid nodes in the scene, you can call:
```csharp=
UiManager.SendInvalidNodesToTrash();
```
## Progress Bar
The `ProgressBar` class inherits the `MenuPanel` class, and thus contains the same fields. It also has a few extra fields.
The `LoadingText` and `ProgressText` fields store references to text objects that will display the loading text (e.g. "Loading Nodes...") and the progress text (e.g. "80/120" or "58.3%").
The `ProgressBar` field stores a reference to the progress bar itself (i.e. the colored bar that fills up as more progress is made). This is simply a RectTransform whose width is modified as the progress changes.
The `ProgressDisplayType` field determines whether or not the progress is shown as a percentage (e.g. "58%") or a fraction (e.g. "80/120").
The `HideOnComplete` field determines whether or not the Progress Bar will automatically be hidden whenever the loading is complete.
To create a Progress Bar, you will need a source for both the current progress and the total number of items that must be processed. An example is given below:
```csharp=
using UiManagement;
using UnityEngine;

namespace Example {
    public class ExampleProgressBar : MonoBehaviour {
        [SerializeField]
        private int _totalCount = 2500;

        private int _currentCount = 0;

        private void Start() {
            // Create a Progress Bar to display the progress.
            // Must provide a string description,
            // a Func<int> to obtain the current count,
            // and a Func<int> to obtain the total count.
            // 'BindOneShot()' is inspired by Unity's AudioSource.PlayOneShot()
            // and manages an instance of a ProgressBar, automatically
            // destroying it when the bar is no longer needed.
            ProgressBar.BindOneShot("Counting frames...",
                () => _currentCount,
                () => _totalCount);
        }

        private void Update() {
            if (_currentCount < _totalCount) {
                _currentCount++;
            }
        }
    }
}
```
## Keyboard
The `PhysicalKeyboard` class does not inherit `MenuPanel` (since it is a collection of 3D objects instead of 2D UI objects), but it does contain the 3D equivalents of the same fields.
Additionally, the `KeypressSounds` field contains a list of AudioClips that will be chosen from at random whenever a key is pressed on the virtual keyboard.
The `BoundInputField` field stores a reference to the Input Field that the keyboard is currently editing. Because this field is serialized, it is possible to have instances of the keyboard that are present in the scene that modify only a single field (and not managed by the `UiManager` class).
Each child of the `Physical Keyboard` prefab represents a key on the keyboard. Each key has a `BoxCollider` component (for interactions) and a `PhysicalKeyboardKey` component.
The `PhysicalKeyboardKey` defines the type of key that is represented in the `KeyType` field (which can be `Character`, `Space`, `Backspace`, `Enter`, `Shift`, or `Close`). The `LowercaseValue` and `CapitalValue` fields define the characters (or strings) that a `Character` key represents depending on the `Shift` state of the keyboard. Finally, each key also contains the `KeyLabel` field, which stores a reference to the text object that will be updated with the character's value (i.e. so that the labels update whenever the `Shift` key is pressed).
If you would like to manually bind the keyboard to an `InputField`, you can call:
```csharp=
UiManager.BindKeyboardToInputField(inputField);
```
Alternatively, you can simply replace the `InputField` with an instance of `KeyboardSelectableInputField`, which will automatically bind the keyboard to itself whenever it is selected.
Lastly, if you would like, you can listen to keypresses by adding listeners to the `PhysicalKeyboard.OnPress` event. Each key emits a `PhysicalKeyPressEventArgs` containing the type of key (i.e. `Character`, `Space`, `Enter`, etc.), as well as the value of the key. A short example is given below:
```csharp=
using UiManagement;
using UnityEngine;

namespace Example {
    public class ExampleKeyboardListener : MonoBehaviour {
        private void Start() {
            UiManager.PhysicalKeyboardInstance.OnPress += PrintKey;
        }

        private void PrintKey(object sender, PhysicalKeyPressEventArgs keyPress) {
            if (keyPress.KeyType == PhysicalKeyboardKey.PhysicalKeyType.Character) {
                Debug.Log(keyPress.KeyValue);
            } else {
                Debug.Log("Special key pressed!");
            }
        }
    }
}
```
It is also possible to listen to keypresses *only* while the currently selected `InputField` stays focused. To do so, simply listen to `PhysicalKeyboard.OnCurrentFocusKeyPress` instead.