---
tags: Master, Simulation
---

# Simulation homework 3

## Exercise 1

For this exercise we made two versions: one using a rectangle as the boundary, and one using the exponential-distribution method. The empirical distributions are practically indistinguishable between the two versions, and in both cases they follow the theoretical one. The efficiencies, on the other hand, are very different.

**Rectangle boundary:**

![rect version](https://i.imgur.com/u2d06va.png)

The best-fitting rectangle spans from $-6$ to $6$ along the $X$ axis, and from $0$ to $\frac{1}{A}$ (where $A = 1.8988$) along the $Y$ axis. The resulting empirical efficiency is $0.158$.

**Exponential distribution boundary:**

| ![exp bound](https://i.imgur.com/HY0RWeh.png) | ![exp hist](https://i.imgur.com/NTEJR69.png) |
| -------- | -------- |

We used an exponential distribution with $\lambda = 0.5$. The resulting coefficient is $c = 2.188$. The left image shows that $c \cdot \mathrm{expon}(x, \lambda) \ge f_1(x)$, where $f_1(x) = 2 f(x)$. Since we only needed the right side of the distribution, we amplified it by a factor of 2 so that it remains a valid PDF. The resulting empirical efficiency is $0.456$.

## Exercise 2

**Average fraction of time spent in each node as a function of the number of state transitions:**

![](https://i.imgur.com/EQnUYWv.png)

The average fraction of time spent in node $\pi_1$, as a function of the number of state transitions, corresponds to the red points in the graph above; $\pi_2$ is shown in blue, $\pi_3$ in black, and $\pi_4$ in green. The horizontal lines mark the theoretical average for each node $\pi$.
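The empirical fractions can be estimated by simulating the chain directly and counting visits to each state. A minimal sketch of this procedure is below; note that the transition matrix `P` is a hypothetical placeholder, not the assignment's actual chain:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 4-state transition matrix (placeholder, NOT the
# assignment's chain); each row must sum to 1.
P = np.array([
    [0.5, 0.3, 0.1, 0.1],
    [0.2, 0.5, 0.2, 0.1],
    [0.1, 0.3, 0.4, 0.2],
    [0.2, 0.2, 0.2, 0.4],
])

# Simulate the chain and count the visits to each state.
n_transitions = 100_000
state = 0
counts = np.zeros(4)
for _ in range(n_transitions):
    counts[state] += 1
    state = rng.choice(4, p=P[state])
empirical = counts / n_transitions

# Theoretical fractions: the stationary distribution, i.e. the left
# eigenvector of P associated with eigenvalue 1, normalized to sum to 1.
eigvals, eigvecs = np.linalg.eig(P.T)
pi = np.real(eigvecs[:, np.argmax(np.real(eigvals))])
pi /= pi.sum()

print("empirical:", empirical)
print("theoretical:", pi)
```

As in the graph above, the empirical fractions converge to the stationary values as the number of transitions grows.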
| Fraction of time in each state | $\pi_1$ | $\pi_2$ | $\pi_3$ | $\pi_4$ |
|:------------------------------ |:------- |:------- |:------- |:------- |
| theoretical value              | 0.32    | 0.32    | 0.20    | 0.16    |
| empirical result               | 0.319   | 0.319   | 0.204   | 0.158   |

**Average throughput as a function of the number of state transitions:**

![](https://i.imgur.com/UIs8Q68.png)

The average throughput is $856\ \text{Mbit/s}$, and the 95% confidence interval for the mean is $[853, 860]$. This is in line with the large fraction of time spent in the states with the highest throughput, $\pi_1$ and $\pi_2$.

## Exercise 3

**Probability of failure at destination versus the error probability $p$ for the two cases: red ($r=2$, $N=2$) and green ($r=5$, $N=10$), compared to the theoretical values (black and blue lines).**

![](https://i.imgur.com/CTWQPRh.png)

The higher number of nodes in the second case allows more redundancy: even with $p$ values as high as 0.7, the failure rate stays below 0.1.

**Average number of successful nodes at each stage for every $p$ ($r=2$, $N=2$); stages 1 and 2 are red and blue, respectively.**

![](https://i.imgur.com/BFOfsWW.png)

**Average number of successful nodes at each stage for every $p$ ($r=5$, $N=10$); stages 1–5 are red, blue, black, green, and magenta, respectively.**

![](https://i.imgur.com/TkjCuY2.png)

It is evident that a higher number of stages amplifies the effect of $p$, whether positive or negative. The last stage is also a good predictor of the final failure rate, since the latter depends directly on that stage.
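The failure probability at the destination can be estimated by Monte Carlo. The sketch below assumes one plausible relay model: each individual transmission is lost independently with probability $p$, and a node (or the destination) succeeds if it receives the message from at least one successful upstream node. This model is an assumption for illustration, not necessarily the assignment's exact one:

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate_failure(p, r, N, n_runs=20_000):
    """Estimate the failure probability at the destination for a relay
    network with r stages of N nodes each, under the assumed model:
    every transmission is lost independently with probability p, and a
    receiver succeeds if at least one upstream copy gets through."""
    failures = 0
    for _ in range(n_runs):
        upstream = 1  # the source always holds the message
        for _ in range(r):
            # each of the N stage nodes hears `upstream` independent copies
            received = rng.random((N, upstream)) > p
            upstream = int(np.count_nonzero(received.any(axis=1)))
            if upstream == 0:
                break  # the message is already lost
        # the destination hears one copy from each successful last-stage node
        delivered = upstream > 0 and bool((rng.random(upstream) > p).any())
        if not delivered:
            failures += 1
    return failures / n_runs
```

For example, `simulate_failure(0.7, 5, 10)` estimates the failure rate for the second configuration at $p = 0.7$; under this model the failure probability increases monotonically with $p$.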
It is also notable that there is a value of $p$ at which the average number of successful nodes stays stable across stages ($p = 0.9$ in the case of $N=10$, 5 stages).

## Exercise 4

**Results using parameters Npoints=1000 and Nreal=50**

![](https://i.imgur.com/2mx7mRV.png)

**Results using parameters Npoints=10 and Nreal=1000**

![](https://i.imgur.com/bsQIDcW.png)

**Results using parameters Npoints=1000 and Nreal=10**

![](https://i.imgur.com/HJ9zBav.png)

Since the total number of realizations tested is $Npoints \cdot Nreal$, what really matters is that this product is high enough. In fact, with 10000 point draws and only 1 realization for each (10000 realizations in total), the results are the same.

**Results using parameters Npoints=10000 and Nreal=1**

![](https://i.imgur.com/spjuxUs.png)

Given that the realization coefficient and the distance between the points are independent variables, the only requirement is that both are sampled uniformly and in sufficient number. We can therefore modify the algorithm to use a single $Ntest$ variable that represents both the number of points and the number of realizations. With $Ntest = 10000$ the results are as good as in the previous image.

![](https://i.imgur.com/9CSegm3.png)

Changing the areas modifies the distance distribution, and therefore the final distribution as well.

**Magenta: distance between areas of 40 (initial conditions)**
**Green: distance between areas of 80**
**Red: distance between areas of 0 (the two rectangles are adjacent)**

![](https://i.imgur.com/yNRqSby.png)
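The single-`Ntest` variant can be sketched as follows. The rectangle dimensions below (`w`, `h`) are hypothetical, chosen only for illustration; the `gap` parameter is the distance between the facing edges of the two areas, as varied in the last plot:

```python
import numpy as np

rng = np.random.default_rng(2)

def sample_distances(n_test, gap=40.0, w=100.0, h=100.0):
    """Draw n_test independent pairs (one uniform point in each of two
    w-by-h rectangles whose facing edges are `gap` apart) and return
    the point-to-point distances.  A single n_test drives both the
    number of points and the number of realizations."""
    p1 = rng.random((n_test, 2)) * [w, h]                   # left rectangle
    p2 = rng.random((n_test, 2)) * [w, h] + [w + gap, 0.0]  # right rectangle
    return np.linalg.norm(p1 - p2, axis=1)

# Larger gaps shift the whole distance distribution to the right,
# which is what the magenta/green/red curves show.
d = sample_distances(10_000, gap=40.0)
```

A histogram of `d` then gives the empirical distance distribution; rerunning with `gap=80.0` or `gap=0.0` reproduces the qualitative shift between the three curves.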