# Checkpoints-2
Using these libraries:
```julia
using Distributed, BenchmarkTools, Test, InteractiveUtils
```
##======================================================
### Write code:
##### 1. Consider the following function, which estimates π by "throwing darts": randomly sample (x, y) points in the square [0.0, 1.0] × [0.0, 1.0] and count how many fall within the unit circle.
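The exercise's original function is not included here, so the following is a minimal serial sketch of the dart-throwing estimator; the name `estimate_pi` and the sample count are assumptions for illustration.

```julia
# Monte Carlo estimate of π: the fraction of uniform points in [0,1]²
# that land inside the unit circle approximates the quarter-circle area π/4.
function estimate_pi(n)
    hits = 0
    for _ in 1:n
        x, y = rand(), rand()      # uniform point in the unit square
        if x^2 + y^2 <= 1.0        # inside the quarter unit circle?
            hits += 1
        end
    end
    return 4.0 * hits / n          # scale the area ratio back up to π
end

println(estimate_pi(10^7))
```

A serial baseline like this is also the natural starting point for the parallel versions in the next exercises.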
##### 2. Generate a million random numbers and sum them. Parallelize the code using Julia's `@distributed` macro. Also try `@spawnat` (or the older `@spawn`) and `fetch` the result.
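A hedged sketch of both approaches follows; the worker count of 4 and the even two-way split for the `@spawnat` variant are assumptions, not part of the exercise statement.

```julia
using Distributed
addprocs(4)                            # assumed worker count

# Approach 1: parallel reduction with @distributed.
# Each worker sums its share of iterations; (+) combines the partial sums.
s1 = @distributed (+) for i in 1:10^6
    rand()
end

# Approach 2: explicit tasks with @spawnat and fetch.
# :any lets the scheduler pick a worker; each task sums half the numbers.
f1 = @spawnat :any sum(rand(5 * 10^5))
f2 = @spawnat :any sum(rand(5 * 10^5))
s2 = fetch(f1) + fetch(f2)

println(s1, " ", s2)                   # both should be near 5 * 10^5
rmprocs(workers())
```

Since `rand()` has mean 0.5, both sums should land close to 500 000, which is a quick sanity check that the parallel reduction is correct.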
##### 3. Consider the double for loop in the `lap2d!()` function below:
- Look again at the double for loop in the lap2d! function and think about how you could use SharedArrays.
- Create a new script where you import Distributed, SharedArrays and BenchmarkTools and define the lap2d! function
- Benchmark the original version (e.g. with `@btime` from BenchmarkTools).
- Now create a new method for this function which accepts SharedArrays.
- Add worker processes with addprocs and benchmark your new method when passing in SharedArrays. Is there any performance gain?
- The overhead in managing the workers will probably far outweigh the parallelization benefit because the computation in the inner loop is very simple and fast.
- Try adding sleep(0.001) to the outermost loop to simulate the effect of a more demanding calculation, and rerun the benchmarking. Can you see a speedup now?
- Remember that you can remove worker processes with rmprocs(workers()).
```julia
function lap2d!(u, unew)
    M, N = size(u)
    for j in 2:N-1
        for i in 2:M-1
            # Average of the four nearest neighbours (Laplace stencil)
            @inbounds unew[i,j] = 0.25 * (u[i+1,j] + u[i-1,j] + u[i,j+1] + u[i,j-1])
        end
    end
end
```
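One way the SharedArrays method described in the steps above might look is sketched below; the worker count and grid size are assumptions, and `@distributed` over the outer (column) loop is one reasonable way to split the work.

```julia
using Distributed, SharedArrays, BenchmarkTools
addprocs(4)                                    # assumed worker count

# New method dispatching on SharedArray arguments.
# @sync waits for all workers; @distributed splits the column loop among them.
function lap2d!(u::SharedArray, unew::SharedArray)
    M, N = size(u)
    @sync @distributed for j in 2:N-1
        for i in 2:M-1
            @inbounds unew[i,j] = 0.25 * (u[i+1,j] + u[i-1,j] + u[i,j+1] + u[i,j-1])
        end
    end
end

M = N = 256                                    # assumed grid size
u = SharedArray{Float64}((M, N))
u .= rand(M, N)
unew = SharedArray{Float64}((M, N))

@btime lap2d!($u, $unew)                       # compare against the serial version
rmprocs(workers())
```

Because the stencil update is so cheap, the serial method will likely still win at this problem size; the `sleep(0.001)` trick from the steps above makes each iteration expensive enough for the distributed overhead to pay off.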
##======================================================