# Augmented Reality
## 01 - Introduction
### Mixed Reality
1. **What is Mixed Reality?**
- Presenting both real-world and virtual-world objects in a single display
- Umbrella term for:
- Augmented Reality
- Augmented Virtuality
- Virtual Reality
2. **What is Augmented Reality?**
- virtual enhancement of the real world
- user sees real world
- with virtual objects placed in real world
3. **What is Augmented Virtuality?**
- real enhancement of a virtual world
- real data is mapped on virtual objects (textures etc.)
- >50% of the world is computer generated
4. **What is Virtual Reality?**
- fully virtual world
- physical laws of the real world are applied in the virtual environment
- intent is to immerse the user
5. **Describe Milgram's Reality-Virtuality Continuum**
- Range from real environment to a purely virtual environment
- everything in between is mixed reality
6. **What are the AR Characteristics?**
- AR enhances the user's perception of and interaction with the real world
- uses:
- head mounted displays (HMD)
- head-up displays (HUD)
- smartphones
- not limited to display technologies
- integrates additional info in real world
- tangible interfaces
- intuitive interaction with real life objects
- mobile or portable
7. **Name the differences between AR and VR**
- Tracking: AR needs high accuracy / VR tolerates lower accuracy
- Display device: AR works with a small view / VR needs a wide field of view
- Scene generation: AR gets by with minimal rendering / VR requires realistic images
### Classification
8. **Describe the term EWK**
- Extent of World Knowledge
- Knowledge of the displayed world
- knowledge can cover *where* objects are, *what* they are, or both (*where and what*)
9. **Describe the difference between an unmodelled and a modelled world**
- an unmodelled world only contains raw images or objects, with no knowledge about them
- a modelled world contains information about object positions, the observed viewpoint, and user interaction
9. **What does the term RF mean?**
- Reproduction Fidelity
- describes the quality with which the display can reproduce the content
- ranges from simple wireframes to high-fidelity 3D animations
- from video to high-resolution 3D
10. **What does the term EPM mean?**
- Extent of Presence Metaphor
- how much the user is intended to feel present
- different ways to display the environment
- Monitor to HMD
11. **Explain the Milgram-Weiser Continuum**
- combination of the Milgram and Weiser continua
- ubicomp vs. monolithic
- real vs. virtual environment
- example: mobile gaming sits toward the virtual and ubicomp ends
### Structure of MR Systems
12. **Name the main issues of AR**
- Calibration
- Registration
- Synchronisation
- Tracking
- Visual coherence
- Real-time
- mobile requirements
- Processor power limited
13. **Name the main issues of VR**
- Latency
- Lag
- Degree and metaphor of interaction
- Navigation
- Cybersickness
13. **Name the components of the sensor feedback loop**
- **User** gives
- **Input** to the
- **Reality Engine**. The Engine makes changes and
- displays the results on a **display**
### Categorising Mixed Reality
14. **Name the different Categories of MR**
- Telepresence
- AR
- Desktop AR/VR
- Fishtank VR
- Immersive VR
15. **Explain Telepresence**
- display of the real world
- audio and video data
- places user in another real existing place
- eg. robot control
16. **Explain AR**
- real world enhancement with additional content
- HMD, smartphones,...
- position tracking required (and often an issue!)
17. **Explain Desktop AR/VR**
- PC based
- 3d-glasses in front of display
- cheap
18. **Explain Fishtank VR**
- Projection screens
- display on a small display area and for multiple users
19. **Explain Immersive VR**
- Goal: Immersion
- CAVE/HMD/curved screen
- the user is isolated from the real world as far as possible
- multiuser possible
- = VR
- full virtual world
- Cybersickness!
### Different types of AR
20. **Name the different types of AR**
- Head-up displays (automotive)
- screen based (smartphone)
- SAR = Spatial Augmented Reality (Textile, Automotive)
- stationary (marketing, Lego)
- HMD-based (training, maintenance, assembly lines)
- smart glasses (logistics, military)
- smart lenses (replace HMDs)
21. **Name examples for every field of AR in real life**
- archaeology: documentation, cultural heritage
- Medicine: training, visualisation
- Entertainment: Smartphones, consoles
- tourism: info visualisation
- marketing: ads, catalogues
- logistics: guidance systems
- manufacturing: manuals, instructions
### Challenges and Development
22. **Name most common problems and challenges**
- graphics rendering HW and SW
- Tracking techniques (changes of viewer position properly reflected by rendered graphics)
- Tracker calibration and registration tools (precision)
- display hardware (virtual to real world)
- computer processing hardware for AR
- interaction techniques
23. **Name current focuses in development**
- input devices and displays
- improving optical tracking technology
- research and evaluation of displays
- software
---
## 02 - Computer Graphics
### Introduction
24. **What are computer graphics?**
- graphics created using computers, the representation and manipulation of pictorial data
- the inverse of computer vision (generating images from models rather than extracting models from images)
### Color Perception
25. **What is responsible for Color Perception?**
- the three different cone types in the retina, roughly sensitive to red, green, and blue light (RGB)
### Color Models
26. **Name the different Color Models**
- RGB
- CIE
- HSV
- HLS
- CMYK
### Graphics Pipeline
27. **What is the goal of the Graphics Pipeline?**
- displaying a 2D projection of a 3D scene
- sequential stages transform the data from stage to stage
- geometric and rendering processes involved
27. **What are the steps of the Graphics Pipeline?**
- Vertices to 3D geometric primitives
- Modelling and transformation
- Lighting
- Clipping
- Rasterization
- Texturing and shading
28. **Name the basic primitive objects**
- points
- lines
- triangles
29. **Out of what are all 3D objects constructed?**
- mesh of triangles
29. **What do triangles typically have?**
- 3 Vertices interconnected by edges
- 1 normal vector indicating the front side
- additionally, per-vertex normal vectors exist
28. **Name the different 3D primitives**
- Sphere
- box
- cylinders
- torus (donut-shaped form)
29. **Explain what happens in the modeling and transformation stage**
- primitives are transformed from the local coordinate system to the world coordinate system
30. **What is a transformation?**
- Scaling
- rotation
- translation
- expressed as a 4x4 matrix (see the sketch below)
31. **What coordinate systems exist for 2d and 3d?**
- 2D
- Screen Coordinate System
- Viewport Coordinate System
- 3D
- World Coordinate System
- Object Coordinate System
- Left vs Right handed system
31. **What is a Frustum and what is Frustum Culling?**
- a volume whose contents are projected for rendering
- Has a near and a far clipping plane
- Culling means everything outside the area of the frustum is not rendered
31. **How do changes of FOV and aspect ratio affect an image?**
- a higher vertical/horizontal FOV makes objects appear smaller
- changing the aspect ratio stretches the image along the corresponding axis
32. **Which different types of lighting models exist?**
- Global lighting (direct and indirect light)
- Local lighting (direct light only)
33. **What different types of light sources exist?**
- directional light
- spot light
- point light
### Shading
34. **Name the different types of shading and explain it in short terms**
- Flat shading: one normal vector + one color value per polygon
- Gouraud shading (intensity interpolation shading): lighting evaluated per vertex, colors interpolated across the polygon
- Phong shading: vertex normal vectors interpolated, lighting evaluated on a per-pixel basis (reflection model below)
### Texturing
35. **How can we change the surface of a 3D model?**
- Applying images on the 3D models/objects
- texture coordinates are mapped onto the object
### Z-Buffer
36. **What is a Z-Buffer?**
- = stores the distance from the image plane on a per-pixel basis
- spans the range between near and far clipping plane
- nonlinear (see below)
- higher precision for pixels closer to the near plane
37. **Another name for Z-Buffer**
- Depth buffer
37. **What is Z-Buffer fighting?**
- Z-buffer fighting is caused by polygons lying very close together (nearly identical depth values)
- usually results in flickering where the two surfaces overlap
### Rasterization
38. **What happens in the rasterization step?**
- = scan conversion
- 3D model of the scene is rasterized and a 2D image is generated
- Colors from shading are evaluated
- Textures applied
- surfaces in front (according to the Z-buffer) are drawn
### High Dynamic Range
39. **What is HDR?**
- representing the full range of light intensities
- mapping it for display on a screen not capable of showing the full range
- = tone mapping
### Basic Optimization
42. **Name different types of optimization operations**
- Anti-aliasing (removing sampling artifacts)
- Level of detail (less detail, better performance); levels are best precomputed, as dynamic simplification is computationally intensive and error-prone
- MIP mapping (level of detail applied to textures)
- Billboards (simple representations of complex geometric objects)
### Scene Graphs
42. **What is a scene graph?**
- Collection of nodes in a graph or tree structure
- a node may have many children but only a single parent
- effects/changes of a parent are applied to all its child nodes
- Leaf nodes contain geometry
### Culling Techniques
42. **What are the most common culling techniques?**
- Frustum culling
- Backface culling (typ. biggest performance increase)
- Occlusion culling
- Contribution culling
43. **How can Frustum Culling be improved?**
- by using bounding volumes instead of testing each primitive (see the sketch below)
44. **How does Backface Culling work?**
- faces whose normal points away from the camera are not rendered (see the sketch below)
---
## 03 - Application Areas and Application Development
45. **Name industries where AR can become a use case or change the daily work**
- Medicine
- Military
- Product Presentation
- Manufacturing and Maintenance
- Logistics
- Manuals
- Automotive
- Annotations
- Architecture
- Art and Entertainment
45. **Name examples for medicine use cases**
- Vein Visualisation
- Diagnosis and treatment
- Virtual Liver Surgery Planning System
- Training of staff
45. **Name examples for military use cases**
- simulations
- training
- Helmet mounted sights
- X-Ray vision of buildings
46. **Name examples for product presentation use cases**
- Lego
- cars
- future products
- magazines
- games
- catalogues
47. **Name examples for manufacturing and maintenance use cases**
- error reduction
- time reduction
- remote support
- manuals
- assembly step tutorials
- training
48. **Name examples for logistics use cases**
- delivery
- communication
- picking
- quality assurance
49. **Name examples for automotive use cases**
- HUDs
- motorcycles
50. **Name examples for annotation use cases**
- Info of real life objects
- museums
- geotagging
51. **Name examples for architecture use cases**
- collaborative design platform
- collaborative design platform protocol
- virtual reality
- augmented reality
52. **Name examples for art and entertainment**
- virtual studios
- special effects
- facade AR
- Westworld experience
- Reality Fighters
- Pokémon Go
### Application Development
53. **Name the steps in the app dev process**
- planning
- development
- testing
- evaluation
54. **What should you especially focus on from the user's point of view?**
- Navigation
- Interaction
- user feedback
- be aware of the target audience
---
## 04 - Tracking Foundations and Optical Tracking
55. **What is tracking?**
- dynamic determination of spatial properties at runtime
- measure position and orientation of a physical object in real life
- for consistency between real world and virtual world
56. **What is tracking in AR used for?**
- display of augmentations
- interaction
57. **What can be tracked in AR applications?**
- objects
- features
- camera position
58. **What can be tracked in VR applications?**
- head
- hand and fingers
- eyes
- torso
59. **What must an AR app address/fulfill?**
- tracking
- calibration
- registration
### Characteristics of Tracking Technology
60. **Name the characteristics of tracking technology**
- Physical phenomena
- measure light, sound, gravity, ...
- Measurement principle
- signal strength, direction, time of flight
- Measured geometric property
- distances, angles
- Sensor arrangement
- sensor fusion (multisensors), sensor sync
- Signal resources
- Passive: magnetic field, light
- active: acoustic, optic, radio wave
- Measurement coordinates
- global vs. local, absolute vs. relative
- Degrees of freedom
- DOF: 3 translations (X, Y, Z), 3 rotations (heading, pitch, roll)
- workspace coverage
- working volume of the tracking system, from roughly 1 m³ up to global coverage (e.g. GPS)
- spatial scan
- outside-In (Sensors outside the tracked device) vs Inside-Out (Tracked device contains sensors)
- static accuracy
- position and orientation tracking of a fixed object
- dynamic accuracy
- measurement of a moving object
- tracking latencies
- latency of the tracking pipeline; fixed and variable latencies; prediction
61. **How can accuracy be increased?**
- Kalman filter
- Combination of prediction and correction
61. **What is the Kalman Filter?**
- sensor fusion / recursive estimation algorithm
- estimates the true state from a set of noisy (faulty) measurements
- finds an optimal weighting factor for each successive state estimate (see the sketch below)
62. **Difference between fixed and variable latencies?**
- fixed: sampling sensors, pose estimation
- variable: network communication, buffering, synchronization
### Registration
63. **What are the common AR registration issues?**
- static errors (viewpoint and environment remain still)
- Causes are Optical distortion, errors in tracking system and incorrect viewing parameters
- dynamic errors (viewpoint and/or environment move)
- Causes are Delays and lags
- jitter (even with markers, virtual objects can appear to move although nothing actually moved)
63. **Solution for jitter?**
- if the 3rd measurement is in range of the 1st, the 2nd can be replaced by interpolating between the 1st and 3rd (see the sketch below)
### Tracking Systems
64. **Name different types of tracking systems**
- GPS
- Inertial tracking
- Inertial sensing
- Inertial sensing (accelerometer)
- accelerometer and gyro
- optical tracking
- infrared light
- beam scanning
- videometric
- inside-out
- outside-in
- pattern recognition
- optical tracking with matrix codes
- spatial scan
- SLAM (Simultaneous Localisation and Mapping)
65. **Explain Inertial tracking**
- 2 components (accelerometer, gyro)
- advantages: no stationary tracking hardware, works in large indoor areas
- disadvantages: accumulating positioning errors, orientation drift
66. **Explain Inertial sensing**
- Information about relative transformation of a target
- conserves axis of rotation
- mechanical gyro
67. **Explain Inertial sensing - accelerometer**
- measuring linear acceleration of an object
- measures force on a mass (example: Spring with mass)
68. **Explain Accelerometer and gyro in combination**
- acceleration and orientation measurement
- measurements always relative to last position
- Beware of accumulative error
69. **Explain GPS**
- Global Positioning System
- signals from three to four satellites (the fourth corrects the receiver clock)
- trilateration: intersection of three spheres (equations below)
70. **Explain Optical tracking**
- different types of markers (active, passive (reflect light))
- image processing is used
- landmark recognition
- inside out vs outside in
- Infrared light possible as well
71. **Explain Infrared use of tracking**
- IR camera to track markers
- errors can occur, e.g. from interfering light sources or bright surroundings
72. **Explain Inside Out tracking**
- The tracked device itself contains the cameras or the sensory equipment required for tracking
### Spatial Scan
73. **Explain Videometric**
- tracking of a user's helmet
- with mounted cameras at the ceiling
74. **Explain Beam Scanning**
- Scanning optical beams on a reference
- sensors on target detect time of sweep of the beams
75. **Explain Outside-In**
- Cameras or sensors are placed outside the tracked device
- High speed cameras
76. **Explain multiscopy**
- two or more cameras
- triangulation to get spatial position
- measuring multiple points also yields the orientation
77. **Explain Pattern Recognition**
- one camera
- shape and size of the object are detected (known to the system)
78. **Explain Optical Tracking - Matrix Code**
- barcode
- matrix markers attached to real-world objects
- = object and pattern recognition
- measurement and position calculation via the markers and a camera
- Steps (first two sketched below):
1. Binarisation
2. Connected component analysis
3. Coordinate System estimator
79. **What is SLAM?**
- Simultaneous Localisation and Mapping
- Idea: place a robot in a room and let the robot build a map of the room
- using map to calculate the robot's location in the room
79. **What is odometry?**
- the use of data from the movement of actuators to estimate the change in position over time (see the sketch below)
80. **What parts are included in SLAM?**
- Landmark extraction (object recognition)
- Data association (matching observed landmarks from different scans)
- State estimation (stochastic map building (Kalman Filter used))
- State update (position estimation + displacement calculation)
- Landmark update
- Application areas (AR tracking, robots [iRobot])
---
## 05 - Mobile Augmented Reality and Head-Mounted Displays
81. **What are the components of a mobile AR system?**
- HW computational platform
- Display
- Tracking
- WIFI
- wearable input and interaction
- software
82. **Name devices used with AR**
- backpack setups
- smartphones
- tablets
- consoles
83. **What is used from a mobile device to track the device's location?**
- Communication technology
- WPAN (Bluetooth)
- WLAN
- WWAN (GSM and UMTS, LTE)
84. **Name the main challenges of mobile AR**
- limited computational resources
- size, weight
- battery size
- tracking and registration
- 3d graphics
- real time performance
- networking
85. **Name software for mobile AR**
- OpenGL
- ARKit
- ARCore
- mixare
- AndAR
86. **Name products of Head Mounted displays commonly known**
- HoloLens
- Google Glass
- Google Cardboard
- Samsung GearVR
- HTC Vive
- PlayStation VR
- Oculus Rift
86. **Name binocular cues**
- Accommodation, change of lens to bring object at certain depth in focus
- Vergence, movement of the eyes to map an object onto corresponding points of the retina
- Binocular disparity, the difference in image location of an object seen by the left and right eye
86. **What is the Screen Door effect?**
- It looks like you’re viewing the world through a mesh screen, and is a result of the black, empty spaces between pixels when seen up close.
87. **How do optical see-through HMDs work?**
- mostly used in AR
- similar approach as Head up displays in cars
- real world light reduction
- blending virtual objects in
- optical blending requires no additional computing power
88. **How do video see-through HMDs work?**
- recording real world
- combining real with virtual world
- presenting on a "monitor"
- no depth info
- separate video streams for RL and virtual world
- resolution is limited
89. **Compare video and optical see-through HMDs**
- Advantages of optical blending
- simple
- computationally cheaper than video blending
- Field of view not a major issue
- distortion needs to be compensated
- display of real world is not degraded
- direct view of real world still visible in case of power outage
- no eye offset as with video blending
- Advantages of video blending
- fully obscured real world
- composition on pixel-by-pixel basis
- no semi-transparent systems
- easier to match brightness of RL and virtual objects
- RL and virtual view delays can be matched
- problem with brightness (washed out real world or virtual world)
90. **Compare smart devices, smart glasses and HMDs**
- smart devices
- intuitive interaction
- have to be held
- mainly monoscopic
- smart glasses
- low weight
- reduced interaction possibilities
- very low FoV
- HMDs
- hands-free interaction
- encumbrance of the user
- low FoV
91. **What is a Field of View?**
- the field of view is the area the user can see through an HMD, an AR smartphone/tablet, or smart glasses
91. **What different forms of display technology exist?**
- Prism lenses
- Waveguide arrays
- Half-silvered mirrors
---
## 06 - Spatial Augmented Reality and Tangible User Interfaces
92. **What is Spatial Augmented Reality?**
- Actual objects or real world surfaces are used as display area
92. **What is part of spatial AR?**
- Augmented Surfaces
- projection mapping
- industry applications
- virtual sandbox
93. **What do tangible user interfaces allow?**
- more users
- co-located interaction
- without encumbering the user
94. **What can be used as user interface?**
- actual object surface
- RL world surface
95. **What are the advantages of tangible user interfaces and the real world displays?**
- display surfaces need not be just walls
- flexible projection config
- camera for calibration and tracking
- rendering using texture mapping hardware
- projector has a fixed location
96. **Main problems with projector based tangible user interfaces**
- geometric correction
- color correction
- intensity correction
- display on arbitrary surfaces
### Augmented Surfaces
97. **What are Augmented surfaces?**
- using real world objects as extension for displaying information
- like table or wall
- interaction must be possible
98. **What does the term hyperdragging mean?**
- spatial dragging of virtual objects
- like drag'n' drop
- dragging from one display to another (other display is RL object)
- establishing a link between RL object and virtual object
99. **What components do we need for augmented surfaces?**
- Anchor cursor (link between RL object and virtual object)
- Object aura (area around physical object)
10. **What do we need in order to add virtual objects to a physical object?**
- A link between physical and virtual object is needed
- area around physical object is needed
- data attaching to physical object
- binding virtual object to aura of physical object
10. **What do we need for Augmented surface setups?**
- communication via WIFI
- projection on info table or wall
- identification of physical objects via markers and a camera
- additional camera (tracking)
1. **What is projection mapping?**
- projection on surface or real life object like a house
- calibration is needed
- complex calibration corrections with the help of a projector-camera (ProCam) system
- for advertisement or art projects
1. **Explain the term AR sandbox**
- Kinect
- projector
- sandbox
- Depth images are captured
- heat map generation
1. **Name other augmented surface projects**
- Augmented climbing wall
- pico projectors
- mono-on-mono
- SixthSense and Wear Ur World
### Tangible User Interfaces
1. **What is a tangible user interface?**
- physical object representing an object in the virtual world
- may have physical controllers attached
- TUI provides access to virtual information
- via intuitive physical manipulation
- try to integrate UI in surrounding world
1. **On what objects can a TUI be attached to?**
- solids
- liquids
- gases
1. **Name projects with TUI**
- TUISTER
- Toolstone
- marker based input
- iOrb
- magnifying lens
- Keyers
- ChairIO
- ShoeSoleSense
1. **How to build a TUI?**
- Potentiometer
- Arduino board
---
## 07 - Interaction
1. **What is interaction in general?**
- influence of two entities on each other
- interaction with an environment occurs when the system responds to user input
- action --> reaction
1. **What two types of interaction exist?**
- Abstract interaction (e.g. gesture using data gloves)
- natural interaction (direct connection; virtual object pick up)
1. **What other ways of interaction exist?**
- egocentric interaction
- exocentric interaction
1. **What are the differences between AR and traditional desktops?**
- no mouse or keyboard
- 2D input not ideal for 3D object manipulation
- multi-touch
- speech/gesture recognition
1. **What is marker based interaction?**
- use markers as pointing devices
- e.g. state change mechanisms
- visual representation by icons
- buttons/sliders
- Problems:
- The action which can be performed is not always obvious to the user
- Too many markers can be confusing
1. **How can marker based interaction be used?**
- shake
- rotation
- hiding/showing
1. **What can be visualized with marker based interaction?**
- Menus
- Arrows
- Symbols
- Texts
- Icons
1. **Tell the difference between manipulative and semaphoric gestures**
- manipulative: gestures coupled to the manipulated target
- semaphoric: symbolic static or dynamic gestures
### Motion Capturing
1. **What different types of motion capturing technologies exist?**
- markerless (Kinect, Leap Motion)
- simple markers (commercial systems)
- bodysuit (VR domain)
1. **What is Motion Capturing used for?**
- used for skeletal animation of virtual characters
1. **Explain hand tracking**
- Use of LEDs, reflectors or color markers (FTIP)
- data gloves
- collision detected by distance between finger position and virtual object
- multi-finger detection makes grabbing usable
- problem: detection of multiple hands
1. **Tell me something about gesture tracking**
- good for SAR and HMDs
- HoloLens, Meta, Leap Motion
- no defined starting point (problematic)
- Can lead to fatigue (weight of hand etc.)
1. **Explain head tracking**
- detection of nodding and head shaking
- knowledge about head position and orientation
1. **Explain body tracking**
- used for recognizing bowing down, sitting
- e.g. Kinect
### Gaze based interaction
1. **What is gaze based interaction?**
- common in VR
- Eye tracking
- gaze direction can be calculated by head transformation and eye tracking
1. **What is used if eye tracking not available?**
- approximation of gaze vector
### Speech based interaction
1. **Explain Speech based interaction**
- Good because hands-free
- complex sentences distract the user from a detailed scene
- avoid speech-based interaction under heavy visual load
1. **Name the different types of speech based interaction**
- Commands
- numerical input
- free recognition
1. **What approaches exist?**
- Commands + numerical input
- interpreting free recognition
- e.g. Sphinx, Julius
### Interaction Techniques
1. **Explain the Canonical 3D manipulation tasks**
- Selection –> the task of manually identifying an object
- Position –> the task of moving an object from an initial to a final (terminal) position
- Rotation –> the task of rotating an object from an initial to a final orientation
1. **Explain interaction processes according to Bowman**
- split into three main tasks: Selection, Manipulation, and Release
- main tasks consist of sub-tasks
- manipulation requires selection
- selection does not always lead to manipulation
- combine above mentioned techniques to create new ones
1. **Name the manipulation methods**
- Direct user control (interface gestures mimicking real-world interaction)
- physical user control (objects user can physically touch)
- virtual control (devices a user can virtually control)
1. **Which variables affect the manipulation?**
- distance to the object
- object size
- translation distance
- amount of rotation
- object density
1. **On what do manipulation techniques depend?**
- interaction tasks
- variables
1. **Explain the interaction technique Ray-casting**
- = selection technique
- ray cast from observer position in scene
- collision between ray and interaction object is calculated
- collision detection only in specific areas
- after selection, give feedback to the user (collision test sketched below)
1. **Explain the interaction technique Aperture**
- = selection technique using head and hand sensors
- a selection cone is cast into the scene
- the cone's size depends on the distance between hand and head
1. **Explain the image plane techniques (an interaction technique)**
- use of hand, finger, head tracking
- Head Crusher (ray between head and the center of a framed area)
- Sticky Finger (ray between head sensor and finger sensor)
- Lifting Palm (open palm below the object)
- Framing Hands (palms and thumbs form the corners of a frame)
1. **Explain the hand tracking technique jog dial**
- fingertip recognition
- fingertip orientation tracking
1. **Explain the hand tracking technique jog dial + slider**
- fingertip recognition
- rotation
- slider recognition (sliding the fingertips)
1. **Explain the interaction technique "Studierstubes PIP (Personal Interaction Panel)"**
- interaction via a tracked tablet and pen
- tablet/pen = tangible interface
- asymmetric interaction (dominant hand)
1. **Explain the interaction technique Pick-and-Drop**
- natural extension of drag and drop
- stylus used
- network communication
- switch objects from one device to another
### Visual Hints
1. **What components are used for visual hints?**
- tangible tooltips (on object recognition tooltip appears)
- tangible bubble (like tooltip but a speech bubble)
- ghosted hints (what action you can do with the object)
1. **What types of hints exist?**
- textual hints
- diagrammatic hints
- ghosted hints
- animated hints
- placing annotations in the environment
- Hedgehog labelling
### Menus
1. **How can a menu be represented?**
- textual
- graphically
- 3D Objects
- WIMP interfaces
1. **What are menus used for?**
- changing states
- adjusting scalar values
- choosing from objects/options
1. **How can a menu be placed?**
- world referenced
- object referenced
- head referenced
- body referenced
- device referenced
---
## 08 - Advanced Rendering
1. **What does advanced rendering do?**
- the challenge of blending real-world data and virtual-world data together
- ideally in real time
- mapping virtual content correctly
1. **What needs to be taken into account for real world to virtual world mapping?**
- consistent lighting
- use of shadows
- realness factor of virtual experiences
- virtual people
1. **What two aspects of the above question can be taken care of in AR?**
- consistent lighting
- use of shadows
1. **Why can they be an issue?**
- just partial known environment
- occlusion between real and virtual environment
- lighting conditions need to match
### Stereoscopy
1. **What is stereoscopic vision?**
- 120° FoV
- binocular cues
1. **Name the binocular cues**
- Accommodation (change of lens to bring object in focus)
- vergence (movement of the eyes to map an object onto corresponding points of the retina)
- binocular disparity (difference in the image location of an object seen by the left and right eye)
1. **Name the issues for binocular cues**
- accommodation-vergence conflict
- Focal plane is always on the display surface
- Accommodation is always set to get the focal plane sharp
- Vergence has to change due to disparity
- In real life accommodation and vergence always match
- When using a display they can’t match, unless the display provides different depth layers
- binocular rivalry ( perception alternates between different images presented to each eye)
- diplopia
- suppression
1. **Name the types of perspective projection**
- parallel (two cameras with off-set)
- toed-in (for each eye a camera, pointed towards single focal point)
- off-axis
1. **Name the techniques of rendering stereoscopic images**
- Temporal (Shutter glasses)
- spatial (HMD)
- color (Anaglyph)
- autostereoscopic displays (barrier / lenticular displays)
1. **Name the relevant topics for stereoscopy in AR**
- HMDs
- depth perception
- autostereoscopic tablets
### Depth Perception
1. **What depth perception can occur?**
- oculomotor and binocular cues
- monocular cues (static and motion-induced)
1. **Name additional depth cues**
- Depth of field
- Motion parallax
- Occlusion
- Shadows
- Shading
1. **Name examples for monocular cues**
- relative size
- relative height
- familiar size
- texture gradient
1. **Name examples of motion induced monocular cues**
- deletion and accretion
- motion parallax
### Distance Perception
1. **Why are distances underestimated by users in a virtual environment?**
- measurement methods
- technical factors
- human factors
- environmental variables
- personal variables
### Visual Coherence
1. **What is visual coherence?**
- consistent environment between virtual and real content
- seamless blending of virtual objects in real environment
- easier to handle with video based approach
2. **What are problems of Visual Coherence?**
- Occlusion
- Lighting and Shading
- Shadows
### Occlusion
1. **When does occlusion happen?**
- if virtual objects are in front of real objects
- if virtual objects are behind real objects
- one of the strongest depth cues and should be resolved
1. **What is one of the main challenges of AR?**
- correct occlusion between real and virtual objects
- because no information about real-world geometry is available from the beginning
1. **Name approaches which solve this issue**
- modeling real world beforehand
- generating depth maps
- use of optical flow
- real world object detection and extraction
1. **What is phantom rendering?**
- phantom = virtual counterpart of a real object
- the phantom is rendered invisibly; only the Z-buffer is modified
1. **How is phantom rendering done?**
- draw the camera image into the color buffer
- disable writing to the color buffer
- render the phantoms of the real scene into the Z-buffer
- re-enable writing to the color buffer
- draw the virtual objects (see the sketch below)
1. **Explain model-free occlusion**
- use of depth sensors to generate depth map
- virtual content and real content merged in z-buffer
- real environment is not modeled beforehand
- can be freely modeled based on sensor input
- Approaches: special HW or assumptions
1. **What are the sources of inaccuracy?**
- phantom models not 100% accurate
- errors in static registration
- errors in dynamic registration
- correction can be done in image space (post processing)
1. **How can occlusion refinement be done?**
- edge detection
- Camera has edge detection
- AR application searches for edges
- Edge smoothing
- Polygon renderer displays rendered results
- Others compute optical flow between camera image and model
1. **What is probabilistic occlusion?**
- assumptions about the real world model
- tracking required for rough representation
- transparency increases from inner to outer models
### Lighting
1. **What is photometric registration?**
- consistent illumination between real and virtual objects
- knowledge of illumination of real environment is required
- remote light sources and local illumination are easier to compute
- consider reflection, refraction and shadows for rendering
1. **Explain image based lighting**
- HDR maps used
- representing illumination in physical units
- radiance & irradiance map
1. **What is a radiance map?**
- environment map representing light from the point of the observer
1. **What is an irradiance map?**
- environment map representing outgoing light after reflection
1. **Explain light probes**
- way to physically record radiance maps
1. **Explain the differences between active and passive light probes**
- Passive:
- reflective sphere placed to capture real world (300° FoV)
- determination of light sources
- Active:
- placing a camera in the scene
- record light conditions
- fisheye, 360° camera
1. **Explain the term offline light capturing**
- pre-processing light sources
- multiple images can be used to generate a radiance map
1. **When do we need photometric registration?**
- light probes too complicated
- offline capturing too complex
- instead using regular camera frames
1. **From where can we use data for photometric registration?**
- static images
- Help: user input
- specular reflections
- Reflection in human eye
- diffuse reflections
- Spherical harmonics
- Face detection with machine learning
- shadows
- the shadow-casting geometry has to be known
- shadows have to be classified correctly
- outdoor
- Position and time for sun position
### Shadows
1. **What do shadows add to the scene?**
- level of realism to rendered image
- visual cues to determine spatial relationships
1. **Name the requirements for shadows**
- real objects have corresponding 3D geometry (phantom model)
- known light source
- 4 different configurations available
1. **How are shadows displayed?**
- shadow volumes (project a ray from the light source through each vertex of a shadow casting object into infinity)
- shadow maps (scene is rendered from the view of the light source; depth comparison sketched below)
### Diminished Reality
1. **What does diminished reality mean?**
- taking objects out of the real world
- by superimposing graphics
1. **Which three problems does diminished reality address?**
- determination of region of interest (dynamic set of pixels)
- observation of hidden areas (view of background behind the ROI has to be integrated)
- new content for removed content
1. **Tell me something about the region of interest**
- the region of interest (ROI) is a dynamic set of pixels on the screen to be replaced
- It can be freely visible or partially or fully occluded
- User may outline ROI, use strokes, selection rectangle
- Object behind ROI can be used to define ROI
- ROI could also be an articulated object which has to be tracked to be removed
### Stylized Rendering
1. **What is stylized rendering?**
- emphasizing regions / aspects of the scene
- can be used for visualisation of art projects
1. **Which types of emphasizing exist?**
- cartoon shading
- pencil sketch shading
- painterly rendering
---
## 09 - Collaborative Augmented Reality
1. **What is CSCW?**
- Computer supported cooperative work
1. **Name problems of CSCW**
- disparity between the people who do the work and the ones who get the benefit
- Breakdown of intuitive decision making
- seams (spatial, temporal, functional constraints)
- major categories of seams:
- functional
- Discontinuities between different functional workspaces
- Seams between shared and personal space
- Force the user to change modes of operation
- cognitive
- disruption between existing and new work practices
- Abandoning acquired skills and learning new skills
- Forcing the user to learn new ways of working
- Often rejected if users have to change the way they work
1. **To minimize seams CSCW should support what?**
- support of existing tools
- support of existing working techniques
- visual/audio communication between participants
- Collaborators must be able to maintain eye contact and gaze awareness
- Users must be able to bring real world objects into the interface
1. **Which types of collaboration spaces exist?**
- interpersonal space (video conferences)
- shared workspace
1. **For collaborative AR - which setup possibilities do you know?**
- co-located (same physical space)
- geographically dislocated
1. **What are the issues with VR HW concerning CSCW?**
- images rendered for one viewpoint
- human has to be rendered as avatar
- low portability
1. **Advantages of AR in CSCW?**
- users can see each other's facial expression
- reference of real objects
- no full rendering of environment needed
- real users and virtual world on one display
1. **Name the five key advantages of collaborative AR environments compared to traditional VR setups**
- Virtuality
- augmentation
- cooperation
- independence
- individuality
1. **Name aspects of collaborative AR users?**
- typically co-located
- no issues with viewpoint control
- private and public space sharing has to be considered
1. **Name aspects of collaborative AV and VR users?**
- typically dis-located
- users immersed in VE
- viewpoint problem
- VE should replace real world
1. **Advantages of AR collaboration?**
- shared virtual content
- perception and sharing of stereoscopic 3D data
- separation between public and private space
- inclusion of tangible, real-world objects
1. **Issues of AR collaboration?**
- small FOV
1. **Name different coll. AR Applications**
- TeamWorkStation
- Translucent overlay of live video images of computer screens, two cameras for faces, private + public space
- ClearBoard
- Projection of video stream of remote user directly on the display
- AR^2Hockey
- Mixed Reality Stage
- Projection of virtual characters and models in a real down-scaled environment
- Transvision
- Visualisation of CAD data in a shared environment
- cAR/PE!
- WearCom
- Collaborative Web Space
- 3D web browser, commands via voice
- blue-c
- Empathy Glasses
- VITA
- offsite visualisation of an archaeological dig, combination of various displays
- Display in WIM and life size
- AR Tennis
- Co-located communication via Bluetooth, multi modal
- CheckMate
- HoloPortation
1. **What are awareness cues of AR Collaboration?**
- users have to perceive remote users action
- view frustum visualisation
- gaze vector visualisation
- potential issue: visual clutter
1. **Name awareness cues of AR Collaboration?**
- Empathy Glasses
1. **What are hybrid solutions?**
- similar issues to Networked Virtual Environments
- real objects are hard to transfer into virtual space
- AR users represented in VR / VR awareness parameters visible in AR
- general problem of awareness parameters is visual clutter
- prototypes make use of hand representation and eye tracking
1. **Networked Virtual Environments?**
- Research in:
- Scalability
- Responsiveness
- Collaboration
- Architectures
- Communication protocols
- Human factors
- provide useful and interesting assets for collaborative AR
1. **Design Guidelines for Collaboration in MR?**
- as much information about the remote environment as possible
- 3D mesh of the environment
- 3D mesh updated in real time
- provide an independent PoV
- as many awareness cues as possible
- transmitting speech
- information about posture
- pointing of the hand should be modelled
- cues for events outside the FOV
- possibility to turn awareness cues on and off
- usability and comfort
- comfortable interface
---
## 10 - Evaluation Basics
1. **Evaluation in general**
- important for app usability
- performed by experts
- three groups preferred (HCI experts, developers, users)
1. **What are typical problems for evaluation?**
- Evaluation and user studies are one of the main topics in the humanities community
- developers typically don't come from humanities community
- demand for guidelines is high
1. **What are typical approaches of evaluation?**
- expert studies
- empirical studies
- questionnaires during and after use
- observation of the test subjects during interaction
1. **When and what data should be collected for evaluation?**
- beforehand:
- goal of a study
- setup of tests
- number and nature of tasks to complete
- during evaluation
- completion time of tasks
- afterwards:
- statistical evaluation
- graphical evaluation
1. **What are possible uses of psycho-physiological and neurophysiological measurements?**
- estimation of mental workload
- stress
- strain
- level of cognitive performance
- alertness
- arousal
1. **How can these measurements be performed?**
- Electroencephalography (EEG)
- Event-related potentials (ERP)
- Electrocardiography (ECG)
- Skin conductance response (SCR)
1. **Which statistical analysis methods/tools are used?**
- Arithmetic mean
- Median
- Frequency distribution
- t-test (example below)
- Bonferroni correction
- Analysis of variance (ANOVA)
- Variance
- Standard Deviation
- Quantiles, quartiles and percentiles
- Displaying data: charts
1. **What data should be gathered and prepared for evaluation?**
- amount of participants
- gender
- age
- profession
- degree of experience
- forms
- schedule
- test plan
2. **How to evaluate MR apps?**
- investigate human activity
- participant observation
- in situ interviews
- user recordings (video)
- collaborative viewing of recordings of a group
- transcription
- use of the three design loops:
- Loop1:
- User requirements
- Technology opportunities
- Selection and evaluation of solutions
- Loop2:
- usability studies (not always needed)
- proof of concept
- use of virtual prototyping
- Loop3:
- ergonomic research
- general evaluation
1. **Examples of MR App evaluations?**
- Mixed Reality stage
- Lighthouse Trial
1. **What are the development domains of MR App evaluations?**
- Behavioural domain - user centered approach
- Interaction with the application
- Usability of the user interface
- Application content
- constructional domain
- API development
- every domain has experts
1. **What's the 4-step sequential analysis?**
- User task analysis
- Expert guidelines-based evaluation
- Formative user-centred evaluation
- Summative comparative evaluations
1. **Standards for Questionnaires?**
- NASA Task Load Index (NASA-TLX)
- Subjective assessment tool to rate the perceived workload of the user
- System Usability Scale
1. **Aspects of evaluating with wearable computing applications using a HMD?**
- for outdoor applications
- main issue: supervisor is not able to see what the subject sees
- interconnection of HMD via WIFI to supervisor's laptop
1. **Problems with AR applications?**
- Latency
- Depth perception
- Adaptation
- Fatigue and eye strain