# Running a local webserver with the big file to download
In the `downloader` practice we are asked to download a big file using several processes, each working on a different range of the file. The example file is hosted on a machine whose IP is not reachable from the Eduroam network. To work around this, it is easy to spin up a web server locally and use it to test our implementation.
Here are the steps to do it.
1) Generate a big sample file
From the practice we know the size and the name of the file:
```clike!
...
#define TARGET_URL "http://<IP>/lolo"
#define REMOTE_TARGET_SIZE_IN_BYTES 1047491658L
...
```
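Since the practice splits the download across several processes, it can help to see how `REMOTE_TARGET_SIZE_IN_BYTES` maps to per-process byte ranges. A minimal sketch, assuming 4 workers (the worker count and variable names are illustrative, not taken from the practice code):

```bash!
TOTAL=1047491658   # REMOTE_TARGET_SIZE_IN_BYTES
WORKERS=4
CHUNK=$(( (TOTAL + WORKERS - 1) / WORKERS ))   # ceiling division
for i in $(seq 0 $((WORKERS - 1))); do
  START=$(( i * CHUNK ))
  END=$(( START + CHUNK - 1 ))
  # Clamp the last range to the end of the file
  [ "$END" -ge "$TOTAL" ] && END=$(( TOTAL - 1 ))
  echo "worker $i: bytes=$START-$END"
done
```

Each `bytes=START-END` pair is exactly what an HTTP `Range` header expects.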
We can generate a similar random file doing:
```bash!
dd if=/dev/urandom of=randombigfile2 bs=522 count=2006689
```
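As a quick sanity check, the `dd` parameters multiply out to exactly the size declared in the practice header:

```bash!
# Block size times block count should match REMOTE_TARGET_SIZE_IN_BYTES
echo $((522 * 2006689))   # → 1047491658
```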
Then, in the `downloader` folder we create an `html` folder. We will mount this folder inside the container, right where the default `nginx` configuration expects the web root to be: `/usr/share/nginx/html`.
```bash!
mkdir html
mv randombigfile2 html/lolo
```
Now we run a container from the `nginx` image:
```bash!
docker run -v ./html:/usr/share/nginx/html -p 80:80 nginx
```
Finally, to test that this web server is running correctly we can use `curl`. Since the practice relies on HTTP `Range` requests, it is also worth checking that `nginx` serves byte ranges: `curl -r 0-1023 http://localhost/lolo` should return exactly 1024 bytes.
```bash!
curl http://localhost/lolo -o newoutput
```
This `newoutput` file should be byte-identical to `html/lolo`. We can check this by computing a checksum with `sum` (or, for a stronger guarantee, `sha256sum newoutput html/lolo`):
```
$ sum newoutput html/lolo
58748 1022942 newoutput
58748 1022942 html/lolo
```