---
robots: noindex, nofollow
---

[toc]

# AR Raycasting

## Helpful links

* https://docs.unity3d.com/Packages/com.unity.xr.arfoundation@4.0/manual/raycast-manager.html
* https://learn.unity.com/tutorial/placing-and-manipulating-objects-in-ar#605103a5edbc2a6c32bf5662

## Intro

If you’d like to let the user place a virtual object in relation to a physical structure in the real world, you need to perform a raycast. You “shoot” a ray from the position of the finger tap into the perceived AR world. The raycast then tells you if and where this ray intersects with a trackable, such as a plane or a feature point.

A traditional raycast only considers objects present in its physics system, which isn’t the case for AR Foundation trackables. Therefore, AR Foundation comes with its own variant of raycasts. They support two modes:

* **Single, one-time raycast**: useful, for example, to react to a user event and check whether the user tapped on a plane to place a new object.
* **Persistent raycast**: like performing the same raycast every frame. However, AR platforms can optimize this scenario and provide better results. A persistent raycast is itself a trackable, managed through the `ARRaycastManager` (see the sketch at the end of this section).

As with images and planes, you need to add the corresponding manager script – in this case the `ARRaycastManager` – to the `ARSessionOrigin`. This time, there is no default prefab available for visualization. Therefore, create a new script and attach it to the `ARSessionOrigin` as well.

## AR Raycasting & Object Placement

When you use raycasting for object placement, you can place an object on a plane as well as on a feature point or any other trackable. Note that just placing the object in the Unity scene at the intersection point doesn’t anchor the object to the real world (yet!). This means that over time, its perceived position might slightly change, as it’s not attached to the physical object but instead placed in a virtual, static coordinate system.

# Code

```cs=
using System.Collections.Generic;
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ARSpawnableManager : MonoBehaviour
{
    // The prefab to instantiate on touch.
    [SerializeField]
    private GameObject _prefabToPlace;

    // Cached ARRaycastManager component (lives on the ARSessionOrigin)
    private ARRaycastManager _raycastManager;

    // List for raycast hits is re-used by the raycast manager
    private static readonly List<ARRaycastHit> Hits = new List<ARRaycastHit>();

    void Awake()
    {
        _raycastManager = GetComponent<ARRaycastManager>();
    }
}
```

In the first section, we’re defining the member variables. So far, we need three: `_prefabToPlace` contains the 3D model we want to instantiate. Assign the transparent skull prefab through the Unity Editor. The `[SerializeField]` attribute ensures that it’s assignable from the Unity Editor, even though the variable is private.

We cache the `ARRaycastManager` from the `ARSessionOrigin`, so that we don’t need to retrieve it every single time. Our script assumes that it’s placed on the same GameObject as the raycast manager, which is the `ARSessionOrigin`. In `Awake()`, we retrieve the class instance.

The `Hits` list is reused by the raycast manager. To avoid instantiating and garbage-collecting a list for every raycast, AR Foundation was designed so that we provide an existing list object, which the raycast method then fills with all hits it found in the world.
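As an aside, the persistent mode mentioned in the intro works through the same manager: `ARRaycastManager.AddRaycast()` creates a raycast trackable that the platform re-evaluates every frame. Below is a minimal sketch, not part of this tutorial’s sample: the class name, the `StartPersistentRaycast()` method, and the estimated distance of `1.0f` are illustrative assumptions.

```cs=
using UnityEngine;
using UnityEngine.XR.ARFoundation;

public class ARPersistentRaycastExample : MonoBehaviour
{
    private ARRaycastManager _raycastManager;
    private ARRaycast _raycast;

    void Awake()
    {
        // Assumes this script sits on the same GameObject as the
        // ARRaycastManager, i.e., the ARSessionOrigin.
        _raycastManager = GetComponent<ARRaycastManager>();
    }

    public void StartPersistentRaycast(Vector2 screenPoint)
    {
        // Creates a persistent raycast trackable. The second parameter is an
        // estimated distance to the intersection; 1.0f is just a placeholder.
        // The result may be null if the platform doesn't support this mode.
        _raycast = _raycastManager.AddRaycast(screenPoint, 1.0f);
    }

    void Update()
    {
        if (_raycast != null)
        {
            // The trackable's transform follows the latest intersection.
            Debug.Log($"Persistent raycast position: {_raycast.transform.position}");
        }
    }
}
```

Now, let’s look at the `Update()` method of our `ARSpawnableManager`.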
```cs=
void Update()
{
    // Only consider single-finger touches that are beginning
    Touch touch;
    if (Input.touchCount < 1 || (touch = Input.GetTouch(0)).phase != TouchPhase.Began)
    {
        return;
    }

    // Perform AR raycast against any kind of trackable
    if (_raycastManager.Raycast(touch.position, Hits, TrackableType.AllTypes))
    {
        // Raycast hits are sorted by distance, so the first one
        // will be the closest hit.
        var hitPose = Hits[0].pose;

        // Instantiate the prefab at the given position
        // Note: the object is not anchored yet!
        Instantiate(_prefabToPlace, hitPose.position, hitPose.rotation);

        // Debug output what we actually hit
        Debug.Log($"Instantiated on: {Hits[0].hitType}");
    }
}
```

First, we check if a new touch event on the screen just began. If not, we immediately exit the `Update()` method.

Next, we perform the raycast. The method returns `true` if it hit any valid target. As parameters, we only need to provide the position of the touch event (in screen coordinates) and the `Hits` list, which will be filled with the intersections of the ray and trackables. The last parameter lets you limit what kinds of trackables you want to interact with. In this case, it’s set to `TrackableType.AllTypes`, which includes planes as well as all other trackables. You can of course limit this to whatever suits your scenario, as shown in the variation below.

If a hit was detected, our list is filled with hit poses – potentially more than one. As they are sorted by distance, it usually makes sense to handle only the closest hit, as this is where the user will want to place the object. The `pose` property contains the position and rotation of the trackable surface that the ray hit. We can forward this information directly to Unity’s `Instantiate()` method, along with the reference to the prefab of the 3D model to instantiate at that position. To check what kind of trackable we placed the object on, a `Debug.Log()` statement prints the `hitType` property.

Note that we have placed the 3D model at a position in Unity’s world coordinate system, but we have not yet anchored it to the AR world; anchoring is what would make it stay in place as the phone’s knowledge of the real world evolves.
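If you only want to allow placement on detected planes (and not, say, on feature points), you can tighten the trackable filter. A sketch of that variation, assuming the same fields as in our `ARSpawnableManager`: `TrackableType.PlaneWithinPolygon` restricts hits to the detected boundary polygon of planes.

```cs=
// Variation: only accept hits inside the boundary polygon of detected planes.
// Feature points and other trackable types are ignored.
if (_raycastManager.Raycast(touch.position, Hits, TrackableType.PlaneWithinPolygon))
{
    var hitPose = Hits[0].pose;
    Instantiate(_prefabToPlace, hitPose.position, hitPose.rotation);
}
```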