# Infinite Resolution Textures
## Overview
### The problem
In modern graphics, especially in **real-time graphics**, 3D models are typically combined with rasterized 2D textures via **texture mapping**. But unlike the models, or 2D vector graphics, rasterized textures introduce a resolution limit. They provide a good level of detail when zooming out, thanks to mipmapping, but when zooming in beyond their original resolution the result is blurry and the silhouettes in the image are lost. This is the problem of **texture magnification** addressed in this paper.
### Previous solutions
Vector graphics have long been used even for 2D images. They are still used in offline solutions but are unsuitable for real-time applications. **Silmaps** and **pinchmaps** have been proposed to address this issue in real-time graphics, with an approach similar to the IRT proposed in this paper. However, they come with a few limitations: pinchmaps cannot represent edge intersections in the silhouettes and limit the sample offset distance, while silmaps do allow edge intersections but at the cost of a more complicated custom mipmapping scheme. IRT avoids these limitations by supporting multiple silhouette edges per pixel and by not limiting the magnitude of the UV-coordinate adjustment.
### Key idea

The key idea of the solution presented in this paper is to use a combination of 2D vector graphics and rasterized images for textures. For each texture, both a regular rasterized image and its silhouette map in vector form are needed. When sampling the texture close to a silhouette edge, the silhouette map is used to compute an offset vector ($duv$) that moves the sample away from the blurry edge region. The procedure requires only **a single texture fetch** and boils down to the following line of code:
```
float4 c = tex.SampleLevel( s, uv+duv, lod );
```
where the color $c$ is fetched from texture $tex$ through sampler $s$ at level of detail $lod$, using the adjusted coordinate $uv + duv$. If there is no need for magnification, the rasterized image is used as-is. This is shown in the top right of Figure 1.
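The offset $duv$ is derived from the nearest silhouette edge stored in the vector map. As a rough illustration of the idea (not the paper's actual code), the following HLSL-style sketch pushes a sample that lies within half a texel of a straight silhouette segment out to a safe distance, so that the bilinear footprint no longer straddles the blurry transition; the function name, parameters, and the half-texel margin are assumptions made for this sketch.
```
// Sketch: offset a UV sample away from the nearest silhouette segment (p0, p1).
// texelSize is the texel extent in UV units at the current mip level (assumed input).
float2 ComputeOffset(float2 uv, float2 p0, float2 p1, float texelSize)
{
    float2 e = p1 - p0;
    float  t = saturate(dot(uv - p0, e) / dot(e, e));
    float2 q = p0 + t * e;              // closest point on the segment
    float2 d = uv - q;                  // vector from the edge to the sample
    float  dist = length(d);

    float safe = 0.5 * texelSize;       // assumed half-texel safety margin
    if (dist >= safe || dist <= 0.0)
        return float2(0.0, 0.0);        // far enough away (or exactly on the edge)

    return d * (safe / dist - 1.0);     // push the sample out to the safe distance
}
```
The result would then be used exactly as in the line above, i.e. the shader samples the texture at $uv + duv$ instead of $uv$.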
### What they achieve
They achieve a fast technique for producing more distinct edges in magnified, otherwise blurry, textures. Using Bézier curves for the silhouettes is about 30% slower to compute. The results of the technique can be seen in Figure 12.

## Questions
1. They claim that no additional visual details can appear from moving the UV samples around; how can we be sure of this?
<!-- Their simple heuristic of moving the sample away from an edge along the silhouette map's normal seems "OK", but can we be sure this never creates new artefacts?
It depends on the edge detection algorithm; one pixel should theoretically never have two algorithms.
A bad edge detector could introduce artefacts -->
2. Can IRT lose small details? In Figure 5 the eye looks crisp but lacks eyelashes, for example. If so, is there a solution to this?

<!-- It does not lose, nor create, any detail that is not in the original texture. However, the higher the image resolution is, the more realistic we expect it to be, including details that were not even in the original texture itself but that we, as humans, expect to be there (e.g. eyelashes). -->
3. Does the low cost of this technique make it viable for real-time rendering?
<!-- In theory the computation of the displacement comes "for free", but there is a potential issue in storing the silhouette map, as memory bandwidth is already a limiting factor in GPU computing. -->
4. What kinds of images is this technique good for? Look at Figure 12.
<!-- If the image is too complex, the silhouette map would take a lot of memory, making this technique less usable. -->
5. Do you think this technique will be seen in games in the future?
<!-- We don't know of any examples currently, but it seems like many games could benefit from this. Especially games with a bit of a "toon" style, like Borderlands. -->
###### tags: `chalmers`