# Assignment 3 - Starting on your app (graded)
**Date**: 20/02/2022
**Group members participating**: Jakob Overgaard (201706812), Oliver Rask Schmahl (201805260), Tobias Fabrin Gade (201809466)
**Activity duration**:
## Goal
## Plan
## Results
**Exercise 1.1**
After exercise 1.1 our application is very basic. It features a Marker that is continuously AR-raycast onto the detected AR trackable planes:
<p align="center">
<img src="https://i.imgur.com/y2YGdrC.jpg" alt="drawing" width="200"/>
</p>
The code for positioning the Marker, located in the Marker Script:
```C#
private void Update()
{
    // Raycast from the centre of the screen against the detected trackable planes
    List<ARRaycastHit> hits = new List<ARRaycastHit>();
    raycastManager.Raycast(Camera.main.ViewportToScreenPoint(new Vector2(0.5f, 0.5f)), hits, TrackableType.Planes);
    if (hits.Count <= 0) return;

    // Move the marker to the pose of the closest plane hit
    var transformMarker = transform;
    transformMarker.position = hits[0].pose.position;
    transformMarker.rotation = hits[0].pose.rotation;
}
```
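For completeness, the snippet above relies on a reference to the scene's ARRaycastManager, which is not shown. A minimal sketch of that setup (class and field names are our own assumptions, not the exact project code) could look like this:

```C#
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;
using UnityEngine.XR.ARSubsystems;

public class MarkerScript : MonoBehaviour
{
    // Assumed to be assigned in the Inspector, e.g. by dragging in the AR Session Origin
    [SerializeField] private ARRaycastManager raycastManager;

    // Update() from the snippet above goes here
}
```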
At the bottom of the screen there is a UI button section with a button for placing an object at the marker's current location and a button for deleting all placed objects.
<p align="center">
<img src="https://i.imgur.com/yLB1T42.jpg" alt="drawing" width="200"/>
</p>
The buttons work simply by calling the two public methods SpawnObject() and RemoveObjects() inside the Object Adder Script. Their code looks like this:
```C#
public void SpawnObject()
{
    // Spawn the prefab at the marker's current pose, parented under the objects holder
    Transform markerTransform = marker.transform;
    Instantiate(
        objectToSpawn,
        markerTransform.position,
        markerTransform.rotation,
        objectsHolder.transform);
}
```
and
```C#
public void RemoveObjects()
{
    // Destroy every object that has been parented under the objects holder
    foreach (Transform child in objectsHolder.transform)
    {
        GameObject.Destroy(child.gameObject);
    }
}
```
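The references these two methods use are not shown above; a minimal sketch of the Object Adder Script's assumed fields (names are ours) is:

```C#
using UnityEngine;

public class ObjectAdder : MonoBehaviour
{
    // All three assumed to be assigned in the Inspector
    [SerializeField] private GameObject objectToSpawn;  // prefab to place
    [SerializeField] private GameObject marker;         // the marker from exercise 1.1
    [SerializeField] private GameObject objectsHolder;  // empty parent for all spawned objects
}
```

The two UI buttons then simply invoke SpawnObject() and RemoveObjects(), presumably through the buttons' OnClick() lists in the Inspector.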
**Exercise 2.1**
With exercise 2.1 we added the functionality to select objects placed in the world. This is done through a new Object Manipulator Script, which is responsible for manipulating objects.
The code for selecting an object uses a Physics raycast to find the object the user taps on. When an object is selected, its transform is stored in *_selectedGameObject*. Additionally, to visualize the selection to the user, we have chosen to use the Outline package, which allows us to add an outline to objects using shaders through the Outline component. The code for this can be seen below *(the Update() function and touch detection are not shown, as they have been covered in previous assignments)*:
```C#
// Raycast from the touch position into the scene
Ray ray = _mainCamera.ScreenPointToRay(touch.position);
RaycastHit hit;
if (Physics.Raycast(ray, out hit, 100))
{
    // Remove the outline from the previously selected object, if any
    if (_selectedGameObject != null)
    {
        Destroy(_selectedGameObject.gameObject.GetComponent<Outline>());
    }

    // Select the hit object and highlight it with an outline
    _selectedGameObject = hit.transform;
    Outline outline = _selectedGameObject.gameObject.AddComponent<Outline>();
    outline.OutlineMode = Outline.Mode.OutlineAll;
    outline.OutlineColor = Color.white;
    outline.OutlineWidth = 10f;
}
else if (_selectedGameObject != null)
{
    // Tapping empty space deselects the current object
    Destroy(_selectedGameObject.gameObject.GetComponent<Outline>());
    _selectedGameObject = null;
}
```
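As a reminder of the context this snippet runs in, a minimal sketch of the elided Update()/touch-detection wrapper (our own reconstruction, not the exact project code) could look like this:

```C#
private void Update()
{
    // Only react when a new touch begins this frame
    if (Input.touchCount == 0) return;
    Touch touch = Input.GetTouch(0);
    if (touch.phase != TouchPhase.Began) return;

    // ... selection raycast from the snippet above ...
}
```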
The result can be seen in the image below:
| None Selected | Cube Selected |
|:-------------:|:-------------:|
|  |  |
**Bonus:** we decided to create an object menu in the top left corner of the screen; for this exercise it only shows the name of the selected object (disregard the button for now). The name is updated by adding the code below inside the Update() function:
```C#
if (_selectedGameObject == null)
{
    selectedText.text = "None";
}
else
{
    selectedText.text = _selectedGameObject.name;
}
```
**Exercise 2.2**
For exercise 2.2 we added a new button to the object toolbar that appears when an object is selected. This was done by simply adding the new UI button element to a parent gameObject in the canvas, which we toggle on and off in the *ObjectManipulatorScript* when objects are selected and deselected. When the button is clicked, we call the DeleteObject() method inside the *ObjectManipulatorScript*, which simply destroys the selected object. The code can be seen below:
Toggle:
```C#
if (_selectedGameObject == null)
{
    selectedText.text = "None";
    manipulatorUI.SetActive(false);
}
else
{
    selectedText.text = _selectedGameObject.name;
    manipulatorUI.SetActive(true);
}
```
DeleteObject():
```C#
public void DeleteObject()
{
    Destroy(_selectedGameObject.gameObject);
}
```
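One detail worth noting: Destroy() only takes effect at the end of the frame, but Unity's overloaded == operator reports a destroyed object as equal to null, so the toggle above falls back to "None" and hides the toolbar on the following Update(). A slightly more defensive variant (our own suggestion, not the project code) would clear the selection immediately:

```C#
public void DeleteObject()
{
    if (_selectedGameObject == null) return;

    // Destroy the selected object and drop the stale reference right away
    Destroy(_selectedGameObject.gameObject);
    _selectedGameObject = null;
    manipulatorUI.SetActive(false);
    selectedText.text = "None";
}
```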
Button:
| Object Tree | Delete Button |
|:-------------:|:-------------:|
|  |  |
**Exercise 3.1**
For exercise 3.1 we implemented movement of the objects. The movement uses the TouchPhase.Moved value of touch.phase: when a game object is selected, an AR raycast is cast each time the finger moves, and the object is moved to that location if the ray hits a trackable plane.
The code for this can be seen below along with a gif showing the movement in the app.
```C#
// Only move when an object is selected and the finger is dragging
if (_selectedGameObject == null) return;
if (touch.phase == TouchPhase.Moved)
{
    List<ARRaycastHit> hits = new List<ARRaycastHit>();
    arRaycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon);
    if (hits.Count <= 0) return;

    // Snap the selected object to the hit pose on the plane
    _selectedGameObject.position = hits[0].pose.position;
    _selectedGameObject.rotation = hits[0].pose.rotation;
}
```
*(GIF showing the movement in the app to be inserted here.)*
**Exercise 3.2**
For exercise 3.2 we implemented rotation of the object.
This was done by checking when two touches are registered on the screen while an object is selected, and then calculating the angle between the line through the two current touch points and the line through the previous two touch points. We calculate the signed angle between these two lines using Unity's built-in function Vector2.SignedAngle().
The math is illustrated in the image below:
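Concretely, for the previous line $a$ and the current line $b$, the signed angle returned by Vector2.SignedAngle(a, b) corresponds to

$$\theta = \operatorname{atan2}(a_x b_y - a_y b_x,\; a \cdot b) \cdot \frac{180}{\pi},$$

i.e. the counter-clockwise angle from $a$ to $b$ in degrees.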
The code for the rotation can be seen below:
*Note: we have multiplied by a constant (2), because the rotation seemed just a bit too slow.*
```C#
if (Input.touchCount == 2 && _selectedGameObject != null)
{
    Touch touch1 = Input.GetTouch(0);
    Touch touch2 = Input.GetTouch(1);

    // Signed angle between the vector through the two touches and the change of that vector this frame
    float angle = Vector2.SignedAngle(touch2.position - touch1.position, touch2.deltaPosition - touch1.deltaPosition);

    // Rotate around the y-axis, scaled by a constant factor of 2 to speed the rotation up
    _selectedGameObject.Rotate(0, -angle * 2 * Time.deltaTime, 0);
}
```
Additionally, we need to change the old movement code so that it no longer updates the object's y-rotation on movement, as it does not make sense for the object's rotation to be reset when it is moved:
```C#
if (touch.phase == TouchPhase.Moved)
{
    List<ARRaycastHit> hits = new List<ARRaycastHit>();
    arRaycastManager.Raycast(touch.position, hits, TrackableType.PlaneWithinPolygon);
    if (hits.Count <= 0) return;

    _selectedGameObject.position = hits[0].pose.position;

    // Take x and z from the plane hit pose but keep the object's own y and w components,
    // so the rotation set by the user is not reset when the object is moved
    var rotation = _selectedGameObject.rotation;
    rotation = new Quaternion(hits[0].pose.rotation.x, rotation.y, hits[0].pose.rotation.z, rotation.w);
    _selectedGameObject.rotation = rotation;
}
```
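For reference, a common alternative to mixing quaternion components directly (a sketch of our own, not the project code) is to align the object to the plane's pose and then re-apply the object's own yaw on top:

```C#
// Hypothetical alternative: follow the plane's orientation but keep the user's yaw
float currentYaw = _selectedGameObject.eulerAngles.y;
Quaternion planeRotation = hits[0].pose.rotation;
_selectedGameObject.rotation = Quaternion.Euler(
    planeRotation.eulerAngles.x,
    currentYaw,
    planeRotation.eulerAngles.z);
```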
A gif of how the rotation looks in app can be seen below: