## Attempt 5: Step back from bunny.obj
- cluster experiment
- eps = 1e-3
- bad news: bunny.obj turns out to be the model that benefits most from the adaptive method among all the models we have tested so far.

## Attempt 4: discuss the rejection and acceptance of the envelope method
- [ ] stats for the time spent checking rejection (is_outside)

- [ ] fix the sampling and rerun for bunny.obj at eps = 1e-3 **(sampling is now only 10 times faster than the adaptive method)**

- [x] stats for the gray area: how many queries are smaller than $\epsilon$ but larger than $\hat{\epsilon}$ (see the sketch after this list)
> total number of envelope trig queries: 2103639
> number of grey trig queries: 650448
> number of certainly in queries: 1303364
> number of certainly out queries: 149827

> total number of envelope trig queries: 4078
> number of grey trig queries: 12
> number of certainly in queries: 3514
> number of certainly out queries: 552
- [ ] in the exact envelope method, do we get any speedup for rejection and acceptance the way sampling does? check the average time for the exact method to reject or accept a query
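For reference, a minimal sketch of the certainly-in / gray / certainly-out classification counted in the stats above, assuming the per-triangle sample distances are already computed; `QueryResult` and `classify_triangle` are hypothetical names, not the toolkit's actual API:

```cpp
#include <vector>

// Classify one sampled triangle query (hypothetical sketch):
// - certainly out: some sample distance exceeds eps
// - certainly in:  every sample distance is within eps_hat
// - gray:          all samples within eps, but some exceed eps_hat
enum class QueryResult { CertainlyIn, Gray, CertainlyOut };

QueryResult classify_triangle(const std::vector<double>& sample_dists,
                              double eps, double eps_hat) {
    bool all_within_hat = true;
    for (double d : sample_dists) {
        if (d > eps) return QueryResult::CertainlyOut; // one bad sample suffices
        if (d > eps_hat) all_within_hat = false;       // falls in the gray band
    }
    return all_within_hat ? QueryResult::CertainlyIn : QueryResult::Gray;
}
```

Counting the three outcomes over all envelope queries reproduces the totals logged above (total = gray + certainly in + certainly out).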
## Attempt 3: combination of exact and sampling
#### Possible next directions:
1. improve sampling: classify the situations where inside is determined as outside -- false positives
2. exact method: collect performance stats -- whether negative decisions take more time than positive ones
3. stochastic sampling method
#### Problems detected:
1. **The current sampling method is not conservative: it returns inside when all sampled distances are below $\epsilon$, whereas it should test against $\hat{\epsilon} = \epsilon - d/\sqrt{3}$ (with $d$ the sampling spacing).**
- The necessity of maintaining conservativeness
- Can non-conservativeness still lead to consistent results?
2. When gray points are detected, we can halve the sampling distance, but then how do we circle out the area that is not gray? (see the sketch below)
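A sketch of the halving idea in problem 2, under loose assumptions: corner samples stand in for a full sample grid, `dist` is a hypothetical point-to-input-surface oracle, and only sub-triangles that remain gray are refined further, which is one way to circle out the non-gray area:

```cpp
#include <array>
#include <cmath>
#include <functional>
#include <Eigen/Core>

using Vec3 = Eigen::Vector3d;
using Tri  = std::array<Vec3, 3>;

// Hypothetical sketch, not the actual envelope code: test a triangle at
// sampling spacing d (corner samples only, for brevity); if any sample is
// in the gray band (eps_hat < dist <= eps), midpoint-subdivide and retry
// at spacing d/2, down to a minimum spacing d_min. Sub-triangles whose
// samples are all within eps_hat stop immediately, so only the gray
// region keeps being refined.
bool inside_envelope(const Tri& tri, double eps, double d, double d_min,
                     const std::function<double(const Vec3&)>& dist) {
    const double eps_hat = eps - d / std::sqrt(3.0); // conservative threshold
    bool gray = false;
    for (const Vec3& p : tri) {
        const double dp = dist(p);
        if (dp > eps) return false;    // certainly out
        if (dp > eps_hat) gray = true; // cannot certify at this spacing
    }
    if (!gray) return true;            // certainly in
    if (d <= d_min) return false;      // give up conservatively
    const Vec3 a = 0.5 * (tri[0] + tri[1]);
    const Vec3 b = 0.5 * (tri[1] + tri[2]);
    const Vec3 c = 0.5 * (tri[2] + tri[0]);
    return inside_envelope({{tri[0], a, c}}, eps, d / 2, d_min, dist)
        && inside_envelope({{tri[1], b, a}}, eps, d / 2, d_min, dist)
        && inside_envelope({{tri[2], c, b}}, eps, d / 2, d_min, dist)
        && inside_envelope({{a, b, c}},      eps, d / 2, d_min, dist);
}
```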
#### A problem to check:
- [x] check why there is a difference between exact and sampling
  - test with a surface and a triangle placed just above eps away from it
  - From my test result, the exact method is actually testing a $\sqrt{3}$-times smaller envelope than the sampling method :hushed:, so presumably the adaptive simplified method can speed up about 3 times, which means it is now about 10 times slower than sampling with the fixed envelope size
  - conclusion: the $\sqrt{3}$ is reasonable given the difference between the two methods -- sampling gives up a factor of $\sqrt{3}$ for conservativeness
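For the record, here is presumably where the $\sqrt{3}$ comes from, assuming the samples form an equilateral grid of spacing $d$, so that every point $p$ of the triangle lies within the circumradius $d/\sqrt{3}$ of its nearest sample $q$:

$$d(p, S) \;\le\; d(q, S) + \|p - q\| \;\le\; \hat{\epsilon} + \frac{d}{\sqrt{3}} \;=\; \epsilon$$

So checking every sample against $\hat{\epsilon} = \epsilon - d/\sqrt{3}$ certifies the whole triangle against $\epsilon$.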
### Performance of adaptive method
Still 25 times slower than the sampling method.

- [x] test with the exact envelope on bunny.off at 1 eps: 50k vertices, time
- [x] test with different simplification methods - [QSLIM](https://github.com/wildmeshing/wildmeshing-toolkit/blob/4f147cde5b472b0182ce6a9d4e31d0041befa451/app/qslim/app/main.cpp#LL70C7-L70C71:35), [uniform](https://github.com/wildmeshing/wildmeshing-toolkit/blob/4f147cde5b472b0182ce6a9d4e31d0041befa451/app/remeshing/app/main.cpp#L93)
  - qslim/uniform do not improve performance
- [x] test with different envelope sizes and files
With adaptive simplification, we have to guarantee that the mesh is always classified as inside the envelope, so $\beta - \alpha > 0$ is required; since $\beta + \alpha = \epsilon$, $\alpha < \epsilon/2$ must be maintained (see below). E.g. $\alpha = \epsilon/2$ does not work with the bunny.off example -> over-refinement.
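Spelling the bound out:

$$\beta + \alpha = \epsilon,\quad \beta - \alpha > 0 \;\Longrightarrow\; \beta = \epsilon - \alpha > \alpha \;\Longrightarrow\; \alpha < \frac{\epsilon}{2}$$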
An adaptive eps gives more room to adjust, resulting in fewer iterations and quicker convergence; verified by experiments with bunny.off and bunny.obj at eps/3.

- [x] test with eps/5 to see whether we get quicker convergence. **Concern:** at the end of the day, convergence cannot be quicker than the exact method -- eps/5 does not work.
- [ ] to calculate the adaptive eps we measure the triangle-to-mesh distance, currently via sampling, which is not conservative, so the measured adaptive eps might be larger than the real one, leading to original mesh points outside the envelope
- [ ] **the target edge length is not universal across different envelope sizes; need to understand why in order to improve performance, since they process different numbers of vertices**
- [ ] the mesh could be simplified further --> see the bottom part of bunny.obj; right now, for eps/3, the vertex count is only reduced by less than half
- [x] Zhongshi: sometimes it helps to smooth the thickness out (conservatively average it a bit), otherwise the sharp transitions would create trouble.
- [ ] cannot explain the performance difference between bunny.off and bunny.obj; how to measure performance is a big problem right now
  - eps = 1e-3, bunny with 3k and 30k vertices
#### Questions:
- does a $k$-times simplified mesh lead to a $k$-times improvement in performance?
## Attempt 2: Adaptive simplification method
### algorithm rundown
1. pick initial $\alpha = \frac{\epsilon}{k}$
2. simplify mesh surface $S$ to $\hat{S}$ within $\alpha$
3. for any triangle $i$ in $\hat{S}$, compute $\beta_i = \epsilon - \max_{p \in \hat{S}_i} d(p, S)$, where $\hat{S}_i$ denotes the set of points inside triangle $i$.
- $\max_{p \in \hat{S}_i} d(p, S)$ is estimated by sampling on triangle $i$.
4. run tetwild using $\beta_i$ as the envelope size for each triangle (a sketch of steps 3-4 follows this list).
Assertions: $\beta_i \ge (1-\frac{1}{k})\epsilon$, $\beta_i \le \epsilon$
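A minimal sketch of the per-triangle computation in steps 3-4, assuming hypothetical helpers (`sample_triangle`, `dist_to_input`) rather than the actual implementation:

```cpp
#include <algorithm>
#include <array>
#include <cmath>
#include <functional>
#include <vector>
#include <Eigen/Core>

using Vec3 = Eigen::Vector3d;
using Tri  = std::array<Vec3, 3>;

// Hypothetical helper: barycentric grid samples on a triangle, with
// roughly `spacing` between neighboring samples.
std::vector<Vec3> sample_triangle(const Tri& tri, double spacing) {
    const double len = std::max({(tri[1] - tri[0]).norm(),
                                 (tri[2] - tri[1]).norm(),
                                 (tri[0] - tri[2]).norm()});
    const int n = std::max(1, static_cast<int>(std::ceil(len / spacing)));
    std::vector<Vec3> pts;
    for (int a = 0; a <= n; ++a)
        for (int b = 0; b <= n - a; ++b) {
            const double u = static_cast<double>(a) / n;
            const double v = static_cast<double>(b) / n;
            pts.push_back((1 - u - v) * tri[0] + u * tri[1] + v * tri[2]);
        }
    return pts;
}

// Step 3: beta_i = eps - max_{p in triangle i} d(p, S), with the max
// estimated from the samples; dist_to_input stands in for the
// point-to-original-surface distance query.
std::vector<double> compute_adaptive_eps(
    const std::vector<Tri>& tris, double eps, double spacing,
    const std::function<double(const Vec3&)>& dist_to_input) {
    std::vector<double> beta(tris.size());
    for (size_t i = 0; i < tris.size(); ++i) {
        double max_d = 0.0;
        for (const Vec3& p : sample_triangle(tris[i], spacing))
            max_d = std::max(max_d, dist_to_input(p));
        beta[i] = eps - max_d; // per-triangle envelope size handed to tetwild
        // If S_hat was simplified within alpha = eps/k, then max_d <= alpha,
        // so (1 - 1/k) * eps <= beta[i] <= eps should hold.
    }
    return beta;
}
```

Note that the max over samples can underestimate the true maximum distance, which is exactly the conservativeness concern raised in Attempt 3.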
#### Experiments:
- [x] implement and collect the $\beta_i$ distribution, expecting 50% of the $\beta_i$ to be larger than $(1-\frac{1}{k})\epsilon$.

- [ ] test: using 1e-4 with the exact envelope, time ~~ expecting 1.5k per iteration after #15