---
tags: Master, Simulation
---

# Simulation homework 4

## Exercise 1

After implementing a simple discrete-event simulator, as described in points _1-8_, we ran a few simulations to show how the number of packets in the system varies over time. We later replaced the event queue implementation with a binary heap for better performance.

**Average number of packets in the system:**

![](https://i.imgur.com/r5pEH3x.png)

The plot above shows the average number of packets on the Y axis and time on the X axis. Each line represents a different simulation, and the black horizontal line shows the theoretical average value. The simulations, which were run with parameters $\lambda = 10$ and $\mu = 15$, all converge towards the theoretical value of $2.0$, but some get very close while others keep a noticeable offset. This is due to the fact that, with a cumulative average, the initial bias of the simulation is carried over, and a lot of in-simulation time is needed to reasonably reduce its weight.

**Average number of packets in the system for different parameters:**

![](https://i.imgur.com/RIuUYlS.png)

The three plots show the average system load through time for different values of $\lambda \in \{1, 10, 14\}$, while the departure rate is fixed at $\mu = 15$. When the arrival rate approaches the departure rate, convergence becomes slower, since the system is very sensitive to high spikes of activity that introduce a strong bias throughout the simulation.

After running several simulations we calculated the average queue time and compared it to the theoretical value. Using $\lambda = 10$ and $\mu = 15$, the theoretical value is $0.1333$; our result is $0.1332$, with a $0.95$ confidence interval of $[0.1318, 0.1345]$. Using those simulations we calculated the empirical distribution of the system load and of the queue waiting time at different epochs.
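The simulator loop described in points _1-8_ can be sketched as follows. This is a minimal illustration, not the actual homework code: it assumes exponential interarrival and service times, uses `heapq` as the binary-heap event queue mentioned above, and all names (`simulate_mm1` and its parameters) are our own.

```python
import heapq
import random

def simulate_mm1(lam, mu, max_time, seed=0):
    """Minimal M/M/1 discrete-event simulation (illustrative sketch).

    Returns the time-averaged number of packets in the system,
    which should approach rho / (1 - rho) for rho = lam / mu < 1.
    """
    rng = random.Random(seed)
    events = [(rng.expovariate(lam), "arrival")]   # binary heap of (time, type)
    t, n = 0.0, 0                                  # current time, packets in system
    area = 0.0                                     # integral of n(t) dt

    while events:
        when, kind = heapq.heappop(events)
        if when > max_time:
            break
        area += n * (when - t)                     # accumulate time-weighted count
        t = when
        if kind == "arrival":
            n += 1
            heapq.heappush(events, (t + rng.expovariate(lam), "arrival"))
            if n == 1:                             # server was idle: start service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))
        else:
            n -= 1
            if n > 0:                              # next queued packet enters service
                heapq.heappush(events, (t + rng.expovariate(mu), "departure"))

    return area / t if t > 0 else 0.0
```

With $\lambda = 10$, $\mu = 15$ the returned time average should approach the theoretical value $2.0$ as `max_time` grows, matching the plots above.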
**Empirical distribution of the number of packets in the system:**

![](https://i.imgur.com/Rxn5BVu.png)

**Empirical distribution of the queue waiting time:**

![](https://i.imgur.com/KmXahR4.png)

In both cases the distributions get sufficiently close to the theoretical one by the end of the simulation. A good cut-off point here could be around time 30.

## Exercise 2

**Average number of packets in the system (2 servers):**

![](https://i.imgur.com/dFTRBwN.png)

Switching from an M/M/1 queue (first exercise) to an M/M/2 queue, while keeping the same parameters $\lambda = 10$ and $\mu = 15$, did not cause any major change in convergence behavior. The main difference is the theoretical average value that the simulations tend to, which is less than half of that in the previous case.

**Average number of packets in the system for different parameters (2 servers):**

![](https://i.imgur.com/yrK59K6.png)

The same holds when changing the parameters: using again $\lambda \in \{1, 10, 14\}$ and a fixed departure rate of $\mu = 15$, the convergence behavior is unaltered, but the convergence value is much lower than in the single-server case.

**Average number of packets in the system for different parameters (2 servers):**

![](https://i.imgur.com/N0bouJ1.png)

Since with more servers more packets can be processed at once, the arrival rate $\lambda$ can be increased up to $c$ times the single-server limit, where $c$ is the number of servers: the system remains stable as long as $\lambda < c\mu$.
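The theoretical values used in this exercise can be reproduced with the standard M/M/c formulas. The sketch below assumes they were obtained via the Erlang C formula (which matches the numbers reported here); the function names are illustrative, not the actual homework code.

```python
from math import factorial

def erlang_c(c, a):
    """Erlang C: probability that an arriving packet must wait,
    for an M/M/c queue with offered load a = lam / mu (requires a < c)."""
    tail = (a ** c / factorial(c)) * (c / (c - a))
    head = sum(a ** k / factorial(k) for k in range(c))
    return tail / (head + tail)

def mmc_metrics(c, lam, mu):
    """Mean number in system L and mean queue waiting time Wq for M/M/c."""
    a = lam / mu
    rho = a / c
    p_wait = erlang_c(c, a)
    wq = p_wait / (c * mu - lam)        # mean time spent waiting in queue
    l = a + p_wait * rho / (1 - rho)    # mean number of packets in system
    return l, wq
```

For $\lambda = 10$, $\mu = 15$ this gives $L = 2.0$ with one server and $L = 0.75$ with two, confirming that the M/M/2 average is less than half of the M/M/1 one; the same functions reproduce the theoretical queue times listed below (e.g. $0.008333$ for $c = 2$).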
Results obtained with `max_time = 1000`:

| $c$ | $\lambda/\mu$ | theoretical avg queue time | empirical | 95% CI |
|---|---|---|---|---|
| 2 | 10/15 | 0.008333 | 0.008286 | [0.008197, 0.008375] |
| 2 | 29/15 | 0.9503 | 0.9102 | [0.8685, 0.9519] |
| 5 | 29/15 | 0.001147 | 0.001149 | [0.001134, 0.001164] |

**Empirical distribution of the number of packets in the system:** $c=2, \lambda = 10, \mu=15$

![](https://i.imgur.com/KAJZIRS.png)

**Empirical distribution of the queue waiting time:**

![](https://i.imgur.com/g25SzLs.png)

In the captions below, $X/Y$ means $\lambda = X$ and $\mu = Y$.

**c=2, 29/15 (very slow convergence)**

![](https://i.imgur.com/3EenzoL.png)
![](https://i.imgur.com/zGjK2gc.png)

**c=5, 29/15**

![](https://i.imgur.com/56ahwhH.png)
![](https://i.imgur.com/vW8HUc1.png)

**c=5, 70/15**

![](https://i.imgur.com/oM2Eo0T.png)
![](https://i.imgur.com/F66R2Nd.png)

**Number of packets served by each server in different cases:**

With a short service time, using the same implementation as before, we see that the first few servers spend significantly more time working. This makes having multiple servers fundamentally useless. A possible implementation that bypasses the problem is to choose the server at random from the list of free servers. The histograms below show the difference between the two approaches for some cases.

**10/15**

![](https://i.imgur.com/4Pnh4ag.png)

**29/15**

![](https://i.imgur.com/XO0f0Au.png)

**29/15**

![](https://i.imgur.com/JBverqO.png)

**70/15**

![](https://i.imgur.com/PJw7xXU.png)

**29/15 with random server selection**

![](https://i.imgur.com/gLPJzb6.png)

## Exercise 3

**Average node speed through time for different speed intervals:**

![](https://i.imgur.com/z9YXcCk.jpg)

Looking at the first graph, which shows the trend of the average speed through the simulation with speed interval $[0, 10]\ m/s$, we immediately noticed that the average speed steadily decreases over time.
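The two server-selection policies compared above can be sketched as follows. This is an illustration, not the homework code: `busy` is assumed to be a per-server list of busy flags, and both function names are ours.

```python
import random

def pick_server_first_free(busy):
    """Original policy: always take the lowest-indexed free server,
    so low-index servers end up handling most of the packets."""
    for i, is_busy in enumerate(busy):
        if not is_busy:
            return i
    return None  # all servers busy: the packet joins the queue

def pick_server_random(busy, rng=random):
    """Alternative policy: choose uniformly among the free servers,
    spreading the load evenly across all of them."""
    free = [i for i, is_busy in enumerate(busy) if not is_busy]
    return rng.choice(free) if free else None
```

Both policies yield the same throughput; only the per-server load distribution changes, which is exactly what the histograms above show.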
This can be explained intuitively: the speed at which a node moves is drawn at random every time a waypoint is reached, so slow nodes may take a very long time to reach their waypoint, or may never reach it within the simulation time. At the same time, fast nodes reach their waypoint quickly, increasing the chance of drawing a low speed at the next iteration. Given enough time, at some point during the simulation every node in the system will be moving at a low speed.

Observing the trend of the average speed in this case, we suspected that it may tend to zero: at each _measure speed_ event the current value is lower than the previous one, and it seems to keep decreasing consistently, although over a very long time scale. This holds only for the case where the lower bound is zero; to verify this we ran a simulation with a speed interval of $[0.5, 10]\ m/s$ (last image), which shows that in this case the average speed does converge.

To resolve our doubts we searched for additional information on the topic. From [this](https://www.researchgate.net/publication/2562902_Random_Waypoint_Considered_Harmful) paper, which discusses the phenomenon in depth, we obtained a formula for the theoretical average speed:

$$\bar{v} = \frac{V_{max} - V_{min}}{\ln(V_{max}/V_{min})}$$

From this we can finally see that when the lower bound of the speed goes to zero, the denominator goes to infinity and the average speed consequently tends to zero. In every graph the theoretical average is displayed by the red line. Finally, we see that both in the second graph, which shows the simulation with a speed interval of $[1, 10]\ m/s$, and in the last one, the computed average speed converges to the theoretical one.
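The formula from the cited paper can be evaluated directly; a small sketch (the function name is ours):

```python
from math import log

def rwp_average_speed(v_min, v_max):
    """Long-run time-average node speed under the random waypoint model,
    (v_max - v_min) / ln(v_max / v_min), as given in the cited paper.
    As v_min -> 0 the logarithm diverges and the average collapses to zero."""
    return (v_max - v_min) / log(v_max / v_min)
```

For the interval $[1, 10]\ m/s$ this gives an average of about $3.91\ m/s$, and pushing `v_min` towards zero makes the value shrink towards zero, consistent with the decreasing trend observed in the first graph.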