tags: fh AR/VR

Virtual Worlds Questions

https://hackmd.io/@FH-SDJS/VR-AR-ExamQuestions

What are three ways of modeling 3D objects in VR?

  • By hand, i.e. CAD drawings | hand-modeled: via appropriate tools such as CAD software. This process CAN be supported by motion-capture technology.
  • Procedural modelling | procedurally generated: requires some data to work from, but is a very time-saving technique for objects that populate the background
  • 3D scans of real objects: using depth cameras, volumetric methods, or computed tomography. This requires some additional steps:
    • conversion into a polygon model
    • filling in cracks/imperfections
    • simplifying the geometry
    • texturing

What is scenography and what are its typical elements?

Scenography refers to the technique of building scenes from objects. A scene graph describes the inner and outer structure of a virtual world. It is built by placing 3D objects in the graph, which is rendered at runtime. Depending on the kind of display that is used, it is rendered for a stereo display or a multi-projector system.

A scene graph is a DIRECTED ACYCLIC GRAPH (DAG), which means the nodes are connected to each other with directed edges; this describes the scene efficiently.
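The DAG property can be sketched in a few lines of Python (all class and method names here are illustrative, not from any specific scene graph library): a child node may be attached under several parents, so one object definition is shared instead of duplicated.

```python
class SceneNode:
    """A node in a scene graph; children may be shared between parents,
    so the structure is a DAG rather than a strict tree."""

    def __init__(self, name):
        self.name = name
        self.children = []

    def add(self, child):
        self.children.append(child)
        return self

    def traverse(self):
        """Visit this node and, recursively, its children (render order).
        A shared node is visited once per path leading to it."""
        names = [self.name]
        for child in self.children:
            names.extend(child.traverse())
        return names

# One wheel object reused under two transformation groups (DAG reuse):
wheel = SceneNode("wheel")
left = SceneNode("left-transform").add(wheel)
right = SceneNode("right-transform").add(wheel)
root = SceneNode("root").add(left).add(right)
```

Traversing `root` visits the single `wheel` object twice, once per parent, even though only one copy exists in memory.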

Typical elements are:

(Figure: typical scene graph elements; green = visual/auditory, red = structural)

Which geometrical elements are used in VR, and how are they represented in the computer?

3D objects are modeled using polygons, which consist of vertices and edges; most of the time a triangular shape is chosen. To represent a 3D object, a data structure is required. These are also called surface models. There are two approaches:

  • Polygon nets
    • Polygon nets, or polygon meshes, consist of multiple polygons that share vertices and edges to become one surface. This leads to two tables, one listing the vertices and one listing the edges/faces, which together describe the surface.
  • Triangle strips
    • A more storage-efficient representation of polygon meshes that is also faster to process. Only the first triangle of the strip is specified explicitly; every further vertex then produces another triangle by reusing the two previously defined vertices.
    • This means that instead of 3 * N vertices for N triangles, only N + 2 vertices have to be defined. In a scene graph, objects are described as a set of triangle strips.
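The expansion rule above can be sketched in plain Python (illustrative, not tied to any graphics API): each vertex from index 2 onward forms a triangle with its two predecessors, so 6 strip vertices yield 4 triangles.

```python
def strip_to_triangles(vertices):
    """Expand a triangle strip into explicit triangles.
    Vertex i (for i >= 2) forms a triangle with vertices i-2 and i-1.
    (Real renderers also flip the winding of every second triangle;
    that detail is omitted here.)"""
    return [(vertices[i - 2], vertices[i - 1], vertices[i])
            for i in range(2, len(vertices))]

# 6 strip vertices -> 4 triangles, i.e. N + 2 vertices for N triangles:
strip = ["v0", "v1", "v2", "v3", "v4", "v5"]
triangles = strip_to_triangles(strip)
```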

Using those data structures, more complex bodies can be built:

  • Solid-body models
    • Boundary representation (B-reps)
      • The object is modeled through the surface that encloses it. To determine algorithmically whether an object is cohesive, data structures are required that hold information about the topology of the surface; that is where the previously mentioned lists of vertices and edges come into play. Additionally, it is necessary to differentiate between the inside and the outside of the surface, which is done by listing the vertices and edges of each face either clockwise or counter-clockwise. The order of the vertices determines an orthogonal (normal) vector, which roughly defines the direction from which an observer is expected to look at the face.
    • Primitive instancing
      • Instantiates primitive objects like cylinders, spheres, and tori, or sometimes slightly more complex ones like gears. Many scene graph libraries support at least simple primitive objects.
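The winding-order idea behind B-reps can be sketched with a cross product (plain Python, purely illustrative): listing a triangle's vertices counter-clockwise, as seen by the observer, makes the resulting normal point towards the observer; reversing the order flips it.

```python
def face_normal(a, b, c):
    """Normal of triangle (a, b, c) as the cross product (b - a) x (c - a).
    Counter-clockwise vertex order (as seen by the viewer) yields a normal
    pointing towards the viewer; clockwise order flips it."""
    u = [b[i] - a[i] for i in range(3)]
    v = [c[i] - a[i] for i in range(3)]
    return [u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0]]

# Triangle in the xy-plane, counter-clockwise as seen from +z:
n_ccw = face_normal((0, 0, 0), (1, 0, 0), (0, 1, 0))   # points along +z
# Reversing the vertex order flips the normal to -z:
n_cw = face_normal((0, 0, 0), (0, 1, 0), (1, 0, 0))
```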

How do we mathematically move objects in VR?

Besides leaf nodes and the root node, a scene graph also contains so-called transformation groups.
These define their own (local) coordinate system for their child nodes and carry a
transformation matrix. The transformation defined by such a node describes the
translation, rotation, and scaling of the local coordinate system relative to the
coordinate system of the parent node. To determine the final (global) position,
orientation, and scale of an object, the path from the root of the scene graph to
the corresponding object has to be traversed. The transformation matrices of all
transformation nodes encountered on that path are combined in path order by
right-multiplication. The resulting matrix is then multiplied with the vertex
coordinates of the object.

An example is a scene representing a vehicle. The vehicle consists of a hull and
four wheels attached to it. When the hull moves, the wheels have to move with it.
A scene graph that realizes these requirements is shown here:

(Figure: scene graph of the vehicle example)

One advantage of scene graphs stems from the fact that they are DAGs and not
necessarily tree structures. Definitions of 3D objects can thus be reused very
easily: in the vehicle example, only one wheel object instead of four has to be
kept in memory.
Besides the actual geometric 3D objects, the scene graph usually contains further
elements such as audio sources, light sources, and one or more virtual cameras
(or viewpoints). Lens parameters such as the horizontal and vertical aperture angle
(the so-called horizontal and vertical field of view) as well as the orientation and
position of a virtual camera determine the visible portion of the virtual world.
Another possibility is to express an object in the coordinate system of another
object (the reference object). For example, the vertex coordinates of a geometric
object can be transformed into the coordinate system of the virtual camera. For
this, a path in the graph has to be traversed from the node of the reference object
to the respective object node; edges may also be traversed in the reverse direction.
As before, the transformation matrices encountered on the path have to be multiplied,
but note that the inverse matrix has to be used whenever the corresponding
transformation group was reached via an edge traversed in the reverse direction.
As an example, consider the transformation matrix M_{nail→wheel 1}, which transforms
the object coordinates of the first wheel of a vehicle into the coordinate system of
a nail lying on the road (cf. Fig. 3.2). This results in the following matrix
multiplication:

(Figure: the matrix multiplication for M_{nail→wheel 1})
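The right-multiplication along a scene graph path can be sketched with plain 4×4 homogeneous matrices (no scene graph library assumed; the hull/wheel offsets are made-up numbers). For brevity the sketch uses pure translations; in the reference-object case described above, a matrix reached via a reversed edge would be inverted first.

```python
def translation(tx, ty, tz):
    """4x4 homogeneous translation matrix."""
    return [[1, 0, 0, tx],
            [0, 1, 0, ty],
            [0, 0, 1, tz],
            [0, 0, 0, 1]]

def matmul(a, b):
    """Right-multiply 4x4 matrices: result = a * b."""
    return [[sum(a[i][k] * b[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def apply(m, p):
    """Transform a 3D point by a 4x4 matrix (homogeneous w = 1)."""
    v = [p[0], p[1], p[2], 1]
    return tuple(sum(m[i][k] * v[k] for k in range(4)) for i in range(3))

# Hull sits at (10, 0, 0) in the world; wheel 1 at (1, 0, 0) relative
# to the hull.  Global = M_hull * M_wheel applied to the local origin:
m_hull = translation(10, 0, 0)
m_wheel = translation(1, 0, 0)
m_global = matmul(m_hull, m_wheel)
wheel_pos = apply(m_global, (0, 0, 0))   # -> (11, 0, 0)
```

Moving the hull (changing `m_hull`) automatically moves the wheel with it, which is exactly the behaviour the vehicle example requires.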

How is physics modeled in game engines?

  • Physics such as position and orientation are calculated by a physics engine
    • This engine operates in a physics world, a parallel world to the geometry world. Both worlds impact each other.
    • Collision calculations are also handled by the engine
  • Position and orientation of the object are based on physical laws
    • The 3D rigid body's animation are affected by
      • Mass
      • Linear/angular speed
      • Material related damping parameters
      • Elasticity values
      • Global parameters (like gravity)
      • Initial forces & torques that act on the object at the beginning of the simulation
  • Since collisions can be difficult for engines to calculate, they use a so-called collision proxy (called a hitbox in video games)
    • These proxies can be spheres, cuboids or other shapes
    • Whether simpler or more precise collision proxies make sense is highly application-dependent
    • It is important to note that the proxies are invisible but still affect both the geometry and the physics world
  • Movements are only visible after rendering
  • Movement restrictions (constraints) are taken into account in the simulation by most physics engines.
  • In some cases the freedom of movement of bodies is restricted because they are connected to one another by joints.
    • Typical types of joints are ball-and-socket joints and swivel joints (hinges). For example, the connection between the upper and lower arm could be modeled as a swivel joint.
  • All of the above define how physics should be modeled in game engines
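The per-frame work described above can be sketched as a simple explicit Euler step in 2D (an illustrative simplification; real engines track full rigid-body state, including orientation and angular speed, and use more stable integrators):

```python
def euler_step(pos, vel, force, mass, dt, gravity=-9.81, damping=0.99):
    """Advance one body by one time step: acceleration from applied force
    and gravity, damped velocity, then updated position (explicit Euler)."""
    ax = force[0] / mass
    ay = force[1] / mass + gravity          # global parameter: gravity
    vx = (vel[0] + ax * dt) * damping       # material-related damping
    vy = (vel[1] + ay * dt) * damping
    return (pos[0] + vx * dt, pos[1] + vy * dt), (vx, vy)

# A 2 kg body at rest, 10 m up, falls under gravity for one 0.1 s step:
p, v = euler_step(pos=(0.0, 10.0), vel=(0.0, 0.0),
                  force=(0.0, 0.0), mass=2.0, dt=0.1)
```

Running this in a loop, once per simulation cycle, yields the animation of the rigid body; collision handling against the proxies would be layered on top.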

How are different materials rendered in game engines?

  • When rendering materials one needs to consider two things:
    • The material from which the object is made
    • The pattern/texture of the object surface
  • Emission, reflection and transparency properties give the material its appearance
    • One needs to note that transparent objects need two rendering passes, since they let light pass through and are therefore more affected by their surroundings
  • The physically based rendering (PBR) method allows one to render graphics in a way that accurately models the flow of light (of the real world)
  • Texturing is used to recreate surface structures (such as stone and wood) without having to model every detail geometrically
    • During rasterization, a simple pattern (the texture) is mapped across the triangles of a model
    • Mapping is used to render more realistic surfaces
      • Bump mapping
      • Normal mapping, variant of bump mapping
      • Displacement mapping, actually changes geometry of the object surface
  • Once texturing is done, shaders can be added to give depth to the object surface
    • This rendering is almost always executed by the GPU
    • The vertex and the fragment shader are the two most used ones
  • Applying and considering all the above properties/techniques allows one to render all different kinds of materials
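The idea behind bump mapping can be sketched as follows (an illustrative toy version, not any engine's implementation): instead of changing the geometry, the shading normal is perturbed by the local gradient of a height map, so flat triangles appear to have relief.

```python
def bump_normal(heightmap, x, y, strength=1.0):
    """Bump-mapping sketch: perturb the surface normal of a flat surface
    (base normal +z) using central differences of a height map."""
    dhdx = (heightmap[y][x + 1] - heightmap[y][x - 1]) / 2
    dhdy = (heightmap[y + 1][x] - heightmap[y - 1][x]) / 2
    n = (-strength * dhdx, -strength * dhdy, 1.0)
    length = (n[0] ** 2 + n[1] ** 2 + n[2] ** 2) ** 0.5
    return tuple(c / length for c in n)      # unit-length shading normal

# A completely flat height map leaves the normal pointing straight up:
flat = [[0.0] * 3 for _ in range(3)]
n = bump_normal(flat, 1, 1)
```

Displacement mapping, by contrast, would feed the same height values into the vertex positions themselves, actually changing the geometry.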

Which three types of lighting do game engines use?

  • Directional light
    • the light source is (effectively) infinitely far away, so all rays are parallel
  • Point light
    • shines like a light bulb, spherically in all directions
  • Spot light
    • shines like a point light, but is limited to a cone
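The difference between the three types can be sketched as how each computes the diffuse contribution at a surface point (illustrative textbook formulas, not a specific engine's API):

```python
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def sub(a, b):
    return [x - y for x, y in zip(a, b)]

def directional_intensity(light_dir, normal):
    """Directional light: one fixed direction everywhere, no falloff."""
    return max(0.0, -dot(light_dir, normal))

def point_intensity(light_pos, point, normal):
    """Point light: radiates spherically; falls off with squared distance."""
    to_light = sub(light_pos, point)
    d = math.sqrt(dot(to_light, to_light))
    l = [c / d for c in to_light]
    return max(0.0, dot(l, normal)) / (d * d)

def spot_intensity(light_pos, spot_dir, cutoff_cos, point, normal):
    """Spot light: a point light limited to a cone around spot_dir."""
    to_light = sub(light_pos, point)
    d = math.sqrt(dot(to_light, to_light))
    l = [c / d for c in to_light]
    if dot([-c for c in l], spot_dir) < cutoff_cos:
        return 0.0                     # outside the cone: no contribution
    return max(0.0, dot(l, normal)) / (d * d)
```

For example, a point light two units above a floor facing up contributes 1/4 of its base intensity there, while the same point moved outside a spot light's cone receives nothing.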

How is audio modeled in game engines?

  • in scene-graph systems, sound is integrated in the form of audio nodes
    • audio is emitted spherically from the node
    • using binaural listening (stereo), the listener is able to locate the audio source
      • this requires 2 or more audio channels
      • the ear positioned nearer to the source perceives an increased volume
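The per-ear volume difference can be sketched as simple distance-based stereo panning (a deliberately crude illustration; real engines use HRTFs, delays, and filtering on top of this):

```python
import math

def stereo_gains(source, left_ear, right_ear):
    """Per-ear gain from inverse distance: the nearer ear is louder.
    Positions are 2D (x, y) tuples for simplicity."""
    def gain(ear):
        d = math.dist(source, ear)
        return 1.0 / max(d, 1.0)   # clamp so gain stays finite near an ear
    return gain(left_ear), gain(right_ear)

# A source to the listener's left is louder in the left channel:
left_gain, right_gain = stereo_gains((-3.0, 0.0), (-0.1, 0.0), (0.1, 0.0))
```

Scaling the two output channels by these gains is what lets the listener locate the audio node in the scene.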

How are humans modeled and animated in game engines?

Johannes

  • To represent the body of a human, a skeleton is used.
  • the surface of the object (the skin) is able to deform
  • when movement occurs, the objects that form the body warp

  • animations are based on the skeleton
    • single bones are connected through joints
      • joints can rotate
      • moving these joints affects more than just the immediate neighbouring objects
        • if you move the arm, your hand moves too
        • movement of the knee also influences the foot
  • face bones are also used to simulate expressions on the face of the virtual human
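The joint chain can be sketched as 2D forward kinematics (illustrative, not a specific engine's skinning pipeline): each joint angle is relative to its parent, so rotating a parent joint moves every bone further down the chain, e.g. rotating the shoulder moves both elbow and hand.

```python
import math

def forward_kinematics(bone_lengths, joint_angles):
    """Joint positions of a 2D bone chain.  Each angle is relative to the
    parent bone, so a parent rotation propagates to all descendants."""
    x, y, angle = 0.0, 0.0, 0.0
    positions = []
    for length, theta in zip(bone_lengths, joint_angles):
        angle += theta                     # accumulate parent rotations
        x += length * math.cos(angle)
        y += length * math.sin(angle)
        positions.append((x, y))
    return positions

# Upper arm and forearm, both of length 1, held straight: hand at x = 2.
straight = forward_kinematics([1.0, 1.0], [0.0, 0.0])
# Rotating ONLY the shoulder 90 degrees moves elbow AND hand upward:
raised = forward_kinematics([1.0, 1.0], [math.pi / 2, 0.0])
```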

Which technique is typically used to model fire, water and smoke?

  • A particle system
    • these types of models are non-solid (they change over time)
    • every particle in the system is simulated from its physical properties and the forces of the environment
    • the source where new particles are added to the system is called the "emitter"
      • emitters can emit in different patterns
        • push particles around the emitter
        • push particles in front of the emitter
    • every particle in the system is updated once per cycle:
      • particles exist as long as their lifespan isn't over and they haven't left a defined area
      • every particle reacts to forces (gravity, wind, attenuation)
      • color and texture might also change
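One such update cycle can be sketched as follows (a minimal 2D illustration; the field names and parameter values are assumptions, not from any engine):

```python
import random

def update_particles(particles, dt, gravity=-9.8, lifespan=2.0, area=50.0):
    """One simulation cycle: apply forces, move every particle, and drop
    those whose lifespan is over or that left the defined area."""
    alive = []
    for p in particles:
        p["vy"] += gravity * dt            # react to forces (gravity)
        p["x"] += p["vx"] * dt
        p["y"] += p["vy"] * dt
        p["age"] += dt
        if (p["age"] <= lifespan
                and abs(p["x"]) <= area and abs(p["y"]) <= area):
            alive.append(p)
    return alive

def emit(n):
    """Emitter: add n new particles with randomized initial velocities."""
    return [{"x": 0.0, "y": 0.0, "age": 0.0,
             "vx": random.uniform(-1, 1), "vy": random.uniform(2, 5)}
            for _ in range(n)]

particles = emit(10)
for _ in range(5):                          # five simulation cycles
    particles = update_particles(particles, dt=0.1)
```

Color and texture changes over a particle's lifetime would simply be additional per-particle fields updated in the same loop.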

How are natural objects like hills, terrain and trees typically modeled?

  • procedural generation algorithms
    • example: generating a hill
      • the height of the surface is represented in a matrix grid; every point is represented by a value
        • every value represents the height of a specific area (you can see the grid in the first picture, below the matrix)
      • the algorithm makes sure that values do not differ too much from their neighbours
      • this specific algorithm uses recursion to generate its surface:
        • 1. initial values are put into the matrix
        • 2. values picked at random are slightly changed
        • 3. the missing values are generated (in relation to the neighbouring values, to avoid spikes)
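The recursive idea can be sketched in 1D as midpoint displacement (the 2D matrix variant is the diamond-square algorithm; this simplification is for illustration): each new height is the average of its neighbours plus a random offset that shrinks at every recursion level, so adjacent values never differ too much and no spikes appear.

```python
import random

def midpoint_displacement(left, right, depth, roughness=1.0):
    """Recursively fill heights between two endpoints: the midpoint is the
    neighbours' average plus a random offset that halves at each level."""
    if depth == 0:
        return [left, right]
    mid = (left + right) / 2 + random.uniform(-roughness, roughness)
    first = midpoint_displacement(left, mid, depth - 1, roughness / 2)
    second = midpoint_displacement(mid, right, depth - 1, roughness / 2)
    return first + second[1:]   # drop the duplicated shared midpoint

random.seed(0)                  # reproducible terrain for the example
heights = midpoint_displacement(0.0, 0.0, depth=4)   # 2**4 + 1 values
```

The same averaging-plus-shrinking-noise scheme applied to the rows and columns of a height matrix produces the hills described above.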