# Neural 3D Mesh Renderer
###### tags: `paper`
## Abstract
**Aim**
Model the 3D world from 2D images by incorporating a rendering neural network.
**Problem**
Traditionally: 2D image → 3D polygon mesh.
Polygon meshes are hard to use in rendering neural networks, because the conversion from a mesh to an image (rendering) involves a discrete operation called rasterization, which prevents back-propagation.
**Sol**
Propose an approximate gradient for rasterization that enables the integration of rendering into neural networks.
The method outperforms the existing voxel-based approach.
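A toy illustration (not the paper's actual renderer) of why plain rasterization blocks back-propagation: a pixel's value as a function of a vertex/edge position is a step function, so its numerical derivative is zero almost everywhere.

```python
# Toy 1D "rasterization": a pixel is lit iff an edge at position x covers it.
# The pixel value as a function of x is a step function, so its derivative is
# zero almost everywhere and back-propagation receives no learning signal.
def pixel_value(edge_x, pixel_center=0.5):
    return 1.0 if edge_x > pixel_center else 0.0

eps = 1e-4
x = 0.3  # edge left of the pixel center: pixel stays off
grad = (pixel_value(x + eps) - pixel_value(x - eps)) / (2 * eps)
print(grad)  # zero gradient, even though moving the edge right would light the pixel
```

The paper's approximate gradient replaces this zero/undefined derivative with a non-zero one derived from how the pixel intensity would change as the vertex moves.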
## Introduction
Incorporating rendering into neural networks has a high potential for 3D understanding
What type of 3D representation is most appropriate for modeling the 3D world?
- voxels
    - 3D extensions of pixels
    - regularly sampled from 3D space
    - poor memory efficiency
    - difficult to process at high resolution
- point clouds
    - sets of 3D points
    - high scalability
    - irregular sampling
    - texture and lighting are difficult to apply, because point clouds do not have surfaces
- polygon meshes
    - consist of sets of vertices and surfaces
    - promising: scalable and have surfaces
    - compactness
    > to represent a large triangle, a polygon mesh only requires three vertices and one face, whereas voxels and point clouds require many sampling points over the face
    - suitability for geometric transformations
    > rotation, translation, and scaling of objects are represented by simple operations on the vertices

Because polygon meshes represent 3D shapes with a small number of parameters, the model size and dataset size for 3D understanding can be made smaller.
Therefore, the polygon mesh is used as the 3D format.
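The compactness and geometric-transformation points above can be sketched as follows (a minimal NumPy example; the vertex/face layout is illustrative, not the paper's data format):

```python
import numpy as np

# Minimal triangle mesh: a large triangle needs only 3 vertices and 1 face,
# whereas voxels or point clouds would need many samples over the same surface.
vertices = np.array([[0.0, 0.0, 0.0],
                     [1.0, 0.0, 0.0],
                     [0.0, 1.0, 0.0]])  # shape (num_vertices, 3)
faces = np.array([[0, 1, 2]])           # each face indexes into `vertices`

# Geometric transformations are simple per-vertex operations:
theta = np.pi / 2  # 90-degree rotation about the z-axis
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])
# rotate, then uniformly scale by 2, then translate along x
transformed = 2.0 * (vertices @ R.T) + np.array([1.0, 0.0, 0.0])
print(transformed.round(3))
```

Note that `faces` is untouched: transforming a mesh only moves its vertices, which is why these operations stay cheap and differentiable.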
## Rendering Process
- projecting the vertices of a mesh onto the screen coordinate system
    - differentiable
- generating an image through regular grid sampling (rasterization)
    - difficult to integrate, because back-propagation is prevented by this discrete operation
- solved through an **approximate gradient for rendering peculiar to neural networks**, which facilitates end-to-end training of a system that includes rendering
## Neural Renderer
The neural renderer flows gradients into texture, lighting, and cameras as well as object shapes, enabling:
- single-image 3D mesh reconstruction with silhouette image supervision
- gradient-based 3D mesh editing with 2D supervision
#### Contribution
- propose an approximate gradient for rendering of a mesh, which enables the integration of rendering into neural networks
- perform 3D mesh reconstruction from single images without 3D supervision
- perform gradient-based 3D mesh editing operations, such as 2D-to-3D style transfer and 3D DeepDream
## How 3D representations have been integrated into Neural Networks
1. 3D representations in neural networks
Because the data structure of a polygon mesh is a complicated graph, it is difficult to integrate into neural networks.
2. Single-image 3D reconstruction
3. Image editing via gradient descent
## Limitation
Cannot generate objects with various topologies.
To overcome this limitation, the face-to-vertex relationship must be generated dynamically.
---
### Words
supervision
scalability
differentiable
[topologies](http://blog.altair.co.kr/32002)
> Topology describes how the surfaces of a geometry are connected.
[voxel vs polygon meshes](https://www.youtube.com/watch?v=fGMu5kDNKU8)
---
Rewatch Jaegul Choo's gradient lecture