# Wildmeshing Hackathon Tasks

There are a large number of tasks that have been discussed for post SIGGRAPH Asia / for the hackathon, and this document exists as a dumping ground for these topics. This doc will be used to discuss / itemize tasks, and a bug tracker will be used to sort and assign tasks.

# Characteristics

# Main Goals

- Unified 2D/3D executor and operations [Michael, Denis]
- New data structure with explicit topology + automatic attribute transfer [Daniele]
- Merge Applications + Python Wrapper [Teseo, Jiacheng, Daniel] [Link](https://hackmd.io/SFUw_d2QSKapDTaelkZPrA?both)

## Unified 2D/3D executor and operations

The atomic topological operations of WMTK, their resulting interfaces, and their execution need to be updated to improve thread safety. To minimize the burden on developers we should do our best to

* guarantee topological correctness of our operations,
* minimize concurrency worries, and
* ease the extension of operations for application-specific tasks.

### Guaranteeing topological correctness

To guarantee correctness we want to stick to two atomic topological operations that we can carefully verify via unit tests: split and collapse. All other operations either preserve topological state or need to be written in terms of these two operations (or with unit tests to show equivalence to a reference implementation in terms of those two operations).

### Minimizing concurrency worries

The execution system and mesh storage system allow operations to declare temporary ownership over subsets of the mesh geometry. Once ownership is declared, each thread should have access to its subset of the mesh (including attributes) without any explicit prescription of multi-threading concepts. In other words, parallelism should be managed by the executor with some data structures in the base mesh class, and each operator should be _completely_ agnostic to the potentially parallel nature of the system's execution. Furthermore, operators should be treated as ethereal objects - we must disallow state from being passed between executions of an operation.

### Simplifying extensions

Rather than overload existing functionality we should use a compositional structure to encode how state is manipulated. In our existing code we often need to call _all_ overrides of a given virtual function, which is currently implemented in a haphazard manual process. Instead we should define multiple individual operations and declare how they compose with one another (see the sketch below).

We also found ourselves using functions declared for "checking" as places for geometric manipulation. This was due to an implicit hierarchy where we saw topological updates as primary and geometric updates as secondary. Instead we should make geometric updates the execution components of our operations.
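A minimal sketch of what such a compositional structure could look like. The names here (`OperationStep`, `CompositeOperation`, `add`, `run`) are hypothetical and only illustrate declaring how individual operations chain, instead of relying on overriding a shared virtual function:

```cpp
#include <memory>
#include <utility>
#include <vector>

// Hypothetical illustration only: each step is an independent operation with a
// narrow interface, and a composite declares the order in which the steps run.
struct OperationStep
{
    virtual ~OperationStep() = default;
    virtual bool execute() = 0; // returns false to abort the remaining steps
};

class CompositeOperation
{
public:
    // Declare composition explicitly instead of overriding virtual functions.
    void add(std::unique_ptr<OperationStep> step) { m_steps.push_back(std::move(step)); }

    // Run every step in sequence; stop and report failure as soon as one fails.
    bool run()
    {
        for (auto& step : m_steps) {
            if (!step->execute()) return false;
        }
        return true;
    }

private:
    std::vector<std::unique_ptr<OperationStep>> m_steps;
};
```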
### Implementation

#### Operator structure

The only non-const aspect of each operator should be its execution; the before/after should be used to perform topological checks.

```cpp=
class Operator
{
public:
    Operator(Mesh& m, const Tuple& i)
        : _mesh(m)
        , _input(i)
    {}

    virtual bool before() const { return true; }
    virtual bool execute() = 0;
    virtual bool after() const { return true; }

    virtual std::vector<std::pair<std::string, Tuple>> renew() const { return {}; }
    virtual std::vector<Tuple> modified_simplices(int dim) const;

private:
    Mesh& _mesh;
    const Tuple _input;
};
```

* If an operation wants to _optimize_ by caching some data in `before`, it is responsible for marking that state as `mutable`.
* Checks of state before/after the execution of an operation should be performed during the execution check.

#### Snapshot-based access

##### Lazy Snapshots

#### Composite operations

Our two atomic operations will be used as building blocks for complex topological and geometric transformations. The key to utilizing these atomic operations will be the design of composite operations that execute them.

##### Sequential and Scoped Composite Operators

## Merge Applications + Python Wrapper

High priority TODOs:

- [ ] merge attributes of all applications in a single super struct
- [ ] implement an IO library that reads/writes any format and converts it into a standardized hdf5 format
- [ ] write an example for the json specification

Remaining TODOs:

- [ ] implement an application that executes a json specification
- [ ] replace all direct access to attributes with generalized function calls

# TODO

- [ ] Sort items
- [ ] Assign items

## Architectural Changes

- [ ] Unified base mesh for points + potentially top level simplices
- [ ] each simplex dimension has its own index set; half-faces are stored as face attributes on the next level up if desired
- [ ] two key operations - split and collapse - with each composite operation defined as a chained operation
- [ ] introduce a default attribute behavior during our two operations; let chained operations handle more involved behaviors
- [x] Operations only return a boolean, handle re-populating queue
- [ ] Renew belongs to each operation
- [ ] Invariants belong in the base class

## Structural Changes

- [ ] create a class for "sorted vectors that act as sets" - currently they are all vectors and the sortedness of different quantities is implicit and unobvious
- [ ] standardize use of unordered_map / have our own keys available

## Stylistic Changes

- [ ] Select a naming convention for functions/classes/variables
- [ ] Organize source files and namespaces to match
- [ ] One class per file
- [ ] Introduce smart-tuples (Tuple + mesh ref); see the sketch after this list
- [ ] Use RAII constructs
- [ ] Remove templates from external interfaces (ExecutePass)
- [ ] Remove internally held structs (like VertexConnectivity/TriangleConnectivity in TriMesh)
- [ ] Suppress warnings from external libraries
- [ ] Suppress our own warnings
- [ ] Remove auto in untemplated code (particularly where the return type is a primitive or simple POD type like Tuple)
- [ ] Fix use of REQUIRE vs CHECK in Catch2 tests
- [ ] Replace post-increment with pre-increment, e.g. `i++` by `++i`
- [ ] Convert long lambdas to actual *documented* functions
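For the smart-tuple item above, a minimal sketch of what such a wrapper might look like. The class is written as a template so it stands alone; `MeshType`/`TupleType` stand in for WMTK's `Mesh` and `Tuple`, and the switch argument stands in for the `PV`/`PE`/`PF`/`PT`-style switches used in the navigation pseudocode later in this document - none of these names are real API:

```cpp
#include <utility>

// Hypothetical sketch: a tuple bundled with a reference to the mesh it lives in,
// so navigation does not require passing the mesh around separately.
template <typename MeshType, typename TupleType>
class SmartTuple
{
public:
    SmartTuple(MeshType& mesh, TupleType tuple)
        : m_mesh(&mesh)
        , m_tuple(std::move(tuple))
    {}

    // Navigation forwards to the owning mesh and returns another smart tuple.
    template <typename SwitchType>
    SmartTuple switch_tuple(SwitchType s) const
    {
        return SmartTuple(*m_mesh, m_mesh->switch_tuple(m_tuple, s));
    }

    const TupleType& tuple() const { return m_tuple; }
    MeshType& mesh() const { return *m_mesh; }

private:
    MeshType* m_mesh; // stored as a pointer so smart tuples remain copy-assignable
    TupleType m_tuple;
};
```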
## Multi-Mesh Methodologies

There are a few factors that distinguish the two methodologies for multi-mesh support that we have thought of:

- Storing an operation map and maintaining it through topological operations
  - maintaining correctness
  - implementing navigation
  - storing attributes
- Support of non-manifold geometry

In the following sections we will discuss two approaches: explicit and implicit multi-mesh configurations. But before we begin, let us review our current representation and the challenges of encoding our meshes:

### Meshes

A mesh is an embedded homogeneous simplicial complex, and our encoding is represented by an abstract, homogeneous simplicial complex and a series of attributes encoded on the simplices of the abstract simplicial complex. Although meshes are only "interesting" when we embed them with quantities like positional coordinates or texture coordinates, these are "merely" values stored as maps from all simplices of some dimension to a vector space (such as positions being per-vertex). The main challenge of evolving meshes is how to keep the abstract simplicial complex valid, and when we have multiple meshes the challenge grows to making sure not only that each abstract simplicial complex is valid, but also that the simplicial complexes are coherent with one another.

Notationally, let $S^k$ be a homogeneous simplicial complex whose top level simplices are of dimension $k$, and let $S^k_j$ be the set of $j$-simplices within $S^k$.

#### Mesh Coherence

Say we have a collection of meshes with pair-wise correspondences between simplices of the same dimension (though this dimension may be different than the dimension of the simplicial complex). That is, for any two homogeneous simplicial complexes $S^k, \bar S^\ell$ with $k \leq \ell$, we have an injective correspondence $C$ that assigns to every $k$-simplex $\omega^k \in S^k_k$ a $k$-simplex $C(\omega^k) \in \bar S^\ell_k$. We say that our collection of meshes is coherent if it forms a simply connected set in the graph induced by a partial ordering of meshes.

##### Pairwise Coherence

We cannot allow arbitrary injections of simplices from one mesh to another - they must preserve the topology of the two meshes, so this section specifies the conditions on these injections.

We can induce a partial ordering of homogeneous simplicial complexes according to which complex has a "weaker" connectivity than the other, where the connectivity is defined in terms of which top level simplices are connected to one another. We will encode this relationship with the neighbor map, $\mathcal{N}_{S^k}: S^k_j \rightarrow 2^{S^k_j}$, i.e., the map from a simplex to the simplices adjacent to it. Let $\partial: S^k_j \rightarrow S^k_{j-1}$ be the boundary map. One potential implementation of the neighbor map is to use the set of simplices that share a boundary face, i.e.,

$$
\mathcal{N}_{S^k}(\omega^j) = \{ \hat \omega^j \in S^k_j \mid \partial \hat \omega^j \cap \partial \omega^j \neq \emptyset \}.
$$

(A corresponding map exists for $\bar S^\ell$.) Note that the boundary map is just a map on an abstract simplicial complex, so simplices map to simplices without any concept of position or even embedding.
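As a small, purely illustrative example of the neighbor map: if two triangles $\omega_1, \omega_2 \in S^2_2$ share an edge $e$, then $e \in \partial \omega_1 \cap \partial \omega_2$ and therefore $\omega_2 \in \mathcal{N}_{S^2}(\omega_1)$, whereas a triangle that touches $\omega_1$ only at a vertex shares no boundary edge with it and so does not appear in $\mathcal{N}_{S^2}(\omega_1)$.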
We say that $S^k$ and $\bar S^\ell$ are compatible if and only if

$$
\forall \omega^k \in S^k_k,\quad C(\mathcal{N}_{S^k}(\omega^k)) \subset \mathcal{N}_{\bar S^\ell}(C(\omega^k)) \;\land\; C(\mathcal{N}_{S^k}(\omega^k)) = \mathcal{N}_{\bar S^\ell}(C(\omega^k)) \cap C(S^k_k),
$$

where applying $C$ to a set denotes its pointwise push-forward, i.e. $C(\{\omega^k_i\}_i) = \{C(\omega^k_i)\}_{i}$ over some index set $i$. That is, every neighborhood pushed to the "at least as large" mesh is the same as restricting the corresponding neighborhood to simplices in the image of $C$, and the neighborhood in the image is not larger than the original neighborhood. Note that pairwise coherence is not transitive between heterogeneous dimensions of simplicial complexes.

### Explicit with Mapping

One seemingly straightforward approach for handling multiple meshes is to explicitly store each mesh structure as its own Mesh, together with a set of discrete representations of the correspondence $C$. Each mesh is assumed to be manifold, and any case where we want to support "non-manifold" meshes requires a decomposition of the non-manifold mesh into multiple manifold ones.

#### Topology mapping

Our mesh data structure has two obvious choices for how to store the mapping: the indices of the simplices being mapped, or tuples. Because we want to perform navigation and topological updates on our multiple meshes simultaneously, we have to preserve _oriented_ mappings between the top level simplices of each mesh, and so we would like to store tuples.

#### Maintaining correctness

##### Non-manifold decompositions

##### Guaranteeing all sub-complexes are manifold

#### Implementing navigation

##### Projecting navigation

#### Storing attributes

The storage of attributes is where the multi-mesh methodology shines - the high level picture is that each mesh is allowed to have its own attributes. In particular, each mesh is responsible for its own attributes, and users are responsible for accessing the right attributes from the right mesh. Conversely, if we have an attribute that only belongs to a subset of simplices, like parameters on the boundary of a mesh, we can generate a codimensional mesh for the boundary so that those attributes only have to be stored on the boundary mesh. Furthermore, as is the case with seamed/unseamed mesh pairs, when we have two meshes with the same number of top-level simplices but one has a weaker topology, we can handle the larger number of attributes for lower order simplices (like the extra vertices and edges in a seamed mesh).

### Implicit with tagging and views

## Pseudocode from Leyi? for explicit mesh navigation?
Pseudocode:

```c++
// TODO: add support for 1-dimension meshes
// PV/PE/PF/PT denote switching the vertex/edge/face/tet of a tuple.
std::vector<Mesh> meshes; // the sub-meshes
std::vector<Map> maps;    // correspondence from the parent mesh into each sub-mesh

// Collect tuples that reference the edge of t from each incident top-level simplex.
std::vector<Tuple> get_all_edges(Mesh m, Tuple t)
{
    std::vector<Tuple> ret;
    ret.push_back(t);
    if (m.is_trimesh()) {
        if (!m.is_bd(t)) {
            ret.push_back(m.switch_tuple(t, PF));
        }
    }
    if (m.is_tetmesh()) {
        // walk around the edge in both directions until a boundary is hit
        auto t_cur = t;
        while (!m.is_bd(t_cur)) {
            t_cur = m.switch_tuple(t_cur, PT);
            ret.push_back(t_cur);
            t_cur = m.switch_tuple(t_cur, PF);
            ret.push_back(t_cur);
        }
        t_cur = m.switch_tuple(t, PF);
        ret.push_back(t_cur);
        while (!m.is_bd(t_cur)) {
            t_cur = m.switch_tuple(t_cur, PT);
            ret.push_back(t_cur);
            t_cur = m.switch_tuple(t_cur, PF);
            ret.push_back(t_cur);
        }
    }
    return ret;
}

// Collect tuples that reference the vertex of t from each incident top-level simplex.
std::vector<Tuple> get_all_vertices(Mesh m, Tuple t)
{
    std::vector<Tuple> ret;
    if (m.is_1dmesh()) {
        ret.push_back(t);
        if (!m.is_bd(t)) {
            ret.push_back(m.switch_tuple(t, PE));
        }
    }
    if (m.is_trimesh()) {
        ret.push_back(t);
        auto t_cur = t;
        while (!m.is_bd(t_cur)) {
            t_cur = m.switch_tuple(t_cur, PF);
            ret.push_back(t_cur);
            t_cur = m.switch_tuple(t_cur, PE);
            ret.push_back(t_cur);
        }
        t_cur = m.switch_tuple(t, PE);
        ret.push_back(t_cur);
        while (!m.is_bd(t_cur)) {
            t_cur = m.switch_tuple(t_cur, PF);
            ret.push_back(t_cur);
            t_cur = m.switch_tuple(t_cur, PE);
            ret.push_back(t_cur);
        }
    }
    if (m.is_tetmesh()) {
        // enumerate incident faces via the simplicial-complex star of the vertex
        auto f_st = SC::star(t, m).get_faces();
        for (auto f : f_st) {
            ret.push_back(f);
            ret.push_back(m.switch_tuple(f, PE));
            if (!m.is_bd(f)) {
                auto f1 = m.switch_tuple(f, PT);
                ret.push_back(f1);
                ret.push_back(m.switch_tuple(f1, PE));
            }
        }
    }
    return ret;
}

// Split in the parent mesh m, then propagate the split to every sub-mesh through its map.
bool split_edge(Mesh& m, const Tuple& t)
{
    auto [suc, new_ts] = m.split_edge(t);
    if (suc) {
        auto ts = get_all_edges(m, t);
        for (size_t i = 0; i < meshes.size(); ++i) {
            auto& sub_mesh = meshes[i];
            auto SCs = SC_union(maps[i](ts), sub_mesh); // corresponding edges in sub-mesh i
            for (auto op_t : SCs) {
                auto [sub_suc, new_ts_submesh] = sub_mesh.split_edge(op_t);
                if (!sub_suc) {
                    return false;
                }
                update(maps[i], new_ts, new_ts_submesh);
            }
        }
        return true;
    }
    return false;
}

// similar for collapse_edge
bool extra_check_for_collapse(const Map& map, Mesh& m, const Tuple& t)
{
    auto es = get_all_edges(m, t);
    auto v1s = get_all_vertices(m, t);
    auto v2s = get_all_vertices(m, m.switch_tuple(t, PV));
    // reject the collapse if the edge is not tracked by the map while both endpoints are
    if (!map(es) && map(v1s) && map(v2s)) {
        return false;
    }
    return true;
}
```
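The pseudocode leaves the `Map` type unspecified. Following the "Topology mapping" discussion above (store tuples to preserve orientation), one possible sketch - hypothetical, not an existing WMTK structure - is a list of paired tuples; the lookup behaviour used as `maps[i](ts)` in the pseudocode would have to be built on top of these stored pairs:

```c++
#include <vector>

// Hypothetical sketch: a discrete representation of the correspondence C between
// the parent mesh and one sub-mesh. parent_tuples[j] and child_tuples[j] reference
// the same simplex, with matching orientation, in the two meshes. Tuple is the
// document's tuple type.
struct Map
{
    std::vector<Tuple> parent_tuples;
    std::vector<Tuple> child_tuples;
};
```

The `update(maps[i], new_ts, new_ts_submesh)` call in the pseudocode would then presumably refresh these pairs after each sub-mesh split so the correspondence stays valid.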