# xv6 Rubric
## Build Instructions
#### Follow this link to download and set up QEMU: https://pdos.csail.mit.edu/6.S081/2020/tools.html. Contact us as soon as possible if you face any problems.
#### Copy `schedulertest.c` from [here](https://drive.google.com/file/d/1yIBxNMoJbCeojJS3-Zxz9akjSs-4rlcp/view?usp=sharing) and put the file inside the `user` folder.
#### Add the line `$U/_schedulertest\` to the `UPROGS` list (around line 132 of the `Makefile`).
#### If the code does not compile, give ZERO for the whole assignment; it is too cumbersome to have them fix everything during evaluation.
**Note: Always run `make clean; make qemu` before running xv6.**
## Specification 1: syscall tracing [15 marks]
First, run the OS using
```
make qemu
```
Then
```shell
strace 32 grep hello README
strace 2147483647 grep hello README
strace 2 usertests forkforkfork
```
### Expected output
1. First command: `strace 32 grep hello README`
   Things to note:
   1. Only `read` calls should appear; otherwise give **zero** for output.
   2. The first argument should be 3 (the file descriptor `grep` gets for `README`).
   3. The last argument should equal the return value for the first few reads.
   4. The final return value should be 0 (end of file).
   **Marks: 3 for output, 2 for arguments**
```bash
3: syscall read (3 2744 1023) -> 1023
3: syscall read (3 2881 886) -> 886
3: syscall read (3 2917 850) -> 850
3: syscall read (3 2927 840) -> 840
3: syscall read (3 2830 937) -> 937
3: syscall read (3 2751 1016) -> 1016
3: syscall read (3 2974 793) -> 667
3: syscall read (3 2831 936) -> 0
```
2. Second command: `strace 2147483647 grep hello README`
   Things to note:
   1. All of `trace, exec, open, read, close` should appear. If even one is missing, give **zero** for output.
   2. The `read` arguments should be the same as in the previous command.
   **Marks: 3 for output, 2 for arguments**
```bash
5: syscall trace (2147483647) -> 0
5: syscall exec (12240 12208) -> 3
5: syscall open (12240 0) -> 3
5: syscall read (3 2744 1023) -> 1023
5: syscall read (3 2881 886) -> 886
5: syscall read (3 2917 850) -> 850
5: syscall read (3 2927 840) -> 840
5: syscall read (3 2830 937) -> 937
5: syscall read (3 2751 1016) -> 1016
5: syscall read (3 2974 793) -> 667
5: syscall read (3 2831 936) -> 0
5: syscall close (3) -> 0
```
3. Third command: `strace 2 usertests forkforkfork`
   Things to note:
   1. The output should be long.
   2. This takes some time, so use it to ask the student which schedulers they have implemented.
   **Marks: 5 (0 for printing only 1-2 lines)**
```bash
test forkforkfork: 8: syscall fork -> 10
10: syscall fork -> 11
11: syscall fork -> 12
11: syscall fork -> 13
12: syscall fork -> 14
11: syscall fork -> 15
12: syscall fork -> 16
14: syscall fork -> 17
11: syscall fork -> 18
12: syscall fork -> 19
13: syscall fork -> 20
11: syscall fork -> 21
....
```
## Specification 2: Scheduling
### FCFS [10 marks]
First, quit the current session (`Ctrl-A X`) and rebuild:
```bash=
make clean; make qemu CPUS=1 SCHEDULER=FCFS
```
Run the following command
```bash=
schedulertest
```
While the command is executing, press `Ctrl-P` to check the status of the processes.
#### Expected output
Things to Note:
- Only one process should be running; the others will be in the runnable/sleep state.
- Processes should finish in order: process 5, then process 6, and so on.
- Press `Ctrl-P` multiple times: the currently running process should not change state (i.e., it should stay in `run`) before its completion.
**Marks: 0 if not working; else 10**
```bash
Process 5 finished
PID State rtime wtime nrun
1 sleep 0 266 10
2 sleep 0 265 10
4 sleep 0 111 5
6 run 17 94 1 <--
7 runble 0 111 0
8 runble 0 111 0
9 runble 0 111 0
Process 6 finished
Process 7 finished
Process 8 finished
PID State rtime wtime nrun
1 sleep 0 539 10
2 sleep 0 538 10
4 sleep 0 384 8
9 run 12 372 1 <--
Process 9 finished
```
### PBS [25 marks]
First, quit the current session (`Ctrl-A X`) and rebuild:
```bash=
make clean; make qemu CPUS=1 SCHEDULER=PBS
```
Run the following command
```bash=
schedulertest &
```
After waiting about 20 seconds, press `Ctrl-P` to check the status of the processes.
#### Expected output
Things to Note:
* There will be two kinds of processes: one with ~85 priority (Type 1) and another with ~65 priority (Type 2).
```
PID Prio State rtime wtime nrun
1 55 sleep 0 3125 11
2 60 sleep 0 3123 9
4 60 sleep 0 3063 4
5 65 runble 254 2809 1883 <--
6 85 runble 250 2813 1826
7 65 run 254 2809 1883 <--
8 85 runble 254 2809 1825
9 65 runble 253 2810 1882 <--
```
* In the above output the first three processes are not important for us (they are xv6 system processes and the `schedulertest` process). Consider only the remaining five processes. After the processes have run for 5-10 seconds, press `Ctrl-P` and check the following:
* Processes with numerically lower priority (65 here) should have a higher nrun count than processes with numerically higher priority (85 here). If the counts are very close, consider waiting 10 seconds more.
**Marks:
Everything working fine: 12
If dynamic priority is not implemented, give only 6 marks**
Things to look for in the code:
* If two or more processes have the same priority, check that the scheduler uses the number of times each process has been scheduled to break the tie, and, if the tie remains, the start time of the process.
* Ask the student how they update the niceness and verify it in the code:
$$
niceness = Int\left(\frac{\text{ticks spent in sleeping state}}{\text{ticks spent in (running + sleeping) state}} \times 10\right)
$$
$$
DP = max(0, min(SP - niceness + 5, 100))
$$
**Marks: for each point marks 3 if implemented; total 6**
* After the processes have run long enough that there is a significant gap (around 10) in nrun between the ~85-priority and ~65-priority processes, run the following command:
```bash=
setpriority 12 8 # make sure pid 8 is an ~85-priority process
```
* This will change the priority of the process with pid 8 from ~85 to 12 (the effective value might be slightly lower or higher due to dynamic priority).
* Let the processes run and check whether the gap in nrun between the ~12-priority process and the 65-priority processes keeps decreasing. At some point the ~12-priority process should overtake the other processes in nrun.
* This can take some time, so you can conduct the viva meanwhile; alternatively, if the difference is clearly decreasing, you can award the marks.
**Marks for setpriority: 7**
### MLFQ [30 marks]
First, quit the current session (`Ctrl-A X`) and rebuild:
```bash=
make clean; make qemu CPUS=1 SCHEDULER=MLFQ
```
Run the following command
```bash=
schedulertest
```
While the command is executing, press `Ctrl-P` to check the status of the processes.
#### Expected output
**Initial condition**
```
PID Prio State rtime wtime nrun q0 q1 q2 q3 q4
1 0 sleep 1 133 21 1 0 0 0 0
2 0 sleep 0 132 10 0 0 0 0 0
4 0 sleep 0 4 4 0 0 0 0 0
5 1 runble 2 2 1 2 0 0 0 0
6 1 runble 2 2 1 2 0 0 0 0
7 0 run 0 4 1 0 0 0 0 0
8 0 runble 0 4 0 0 0 0 0 0
9 0 runble 0 4 0 0 0 0 0 0
```
**After sometime [5-10 seconds]**
```
PID Prio State rtime wtime nrun q0 q1 q2 q3 q4
1 0 sleep 1 460 21 1 0 0 0 0
2 0 sleep 0 459 10 0 0 0 0 0
4 0 sleep 0 331 4 0 0 0 0 0
5 3 sleep 27 304 253 2 3 10 29 0
6 3 sleep 26 305 253 2 3 17 21 0
7 4 sleep 28 303 249 2 3 5 9 9
8 0 sleep 2 329 264 2 0 0 0 0
9 0 sleep 4 327 260 4 0 0 0 0
```
**After considerable amount of time [30 seconds to 1 minute]**
```
PID Prio State rtime wtime nrun q0 q1 q2 q3 q4
1 0 sleep 1 945 21 1 0 0 0 0
2 0 sleep 0 944 10 0 0 0 0 0
4 0 sleep 0 816 4 0 0 0 0 0
5 3 sleep 70 746 614 2 3 10 71 38
6 4 runble 73 743 609 2 3 17 48 57
7 4 run 65 751 601 2 3 5 9 46
8 0 sleep 5 811 635 5 0 0 0 0
9 0 sleep 9 807 626 9 0 0 0 0
```
Things to Note:
* The last two processes are created to behave like I/O-bound processes; the rest are CPU-intensive.
* Processes with pid 8 and 9 should have a higher nrun count. **11 marks**
* They should mostly remain in queue 0 or 1 and accumulate only a few ticks there [2-3 ticks]. **3 marks**
* Processes with pid 5, 6, 7 should fluctuate between queues 2/3/4 and accumulate a considerable number of ticks in those queues. **3 marks**
* The time processes 5, 6, 7 spend in q0 and q1 should match the given output. **3 marks**
**Marks: 0 if not working; else 20**
Things to check in the code:
- The general logic looks sound.
- Ageing (periodically promoting processes that have waited too long to a higher-priority queue, to prevent starvation). **1 mark**
- Demotion of a process to a lower-priority queue if it used up the time slice of its present queue. **1 mark**
- Preemption check: suppose a process is running from queue 2 and queues 0 and 1 are both empty. If another process now enters queue 0, the currently running process (from queue 2) must be preempted and the queue-0 process allocated the CPU. **3 marks**
**Marks: 0 if not implemented; else 5**
## General viva [5 marks]
Some pointer questions you can ask:
- How did you implement the strace logic? How did you handle the different number of arguments for different system calls? [for strace only]
- What is preemption? Why is the PBS you implemented not preemptive? [for PBS only]
- How did you implement the queues present in MLFQ scheduling? [for MLFQ only]
- How did you handle timer interrupts with respect to each of the schedulers you implemented?
- What are I/O-bound and CPU-bound processes?
## Specification 3: Procdump [10 marks]
* Check that priority has been implemented for both PBS and MLFQ. **(3 marks)**
* rtime and wtime are implemented **(2 marks)**
* nruns **(3 marks)**
* q_i **(2 marks)**
## Specification 4: Report [10 marks]
* Description in the README of how they implemented the various schedulers: **(4 marks)**
* Answer to the MLFQ question **(3 marks)**:
* **Question:** If a process voluntarily relinquishes control of the CPU (e.g., for doing I/O), it leaves the queuing network, and when it becomes ready again after the I/O it is inserted at the tail of the same queue from which it left earlier. Explain in the README how this could be exploited by a process.
* **Sample answer:** A process can exploit this scheduling policy by performing a redundant I/O just before its time slice in a particular queue expires. The scheduler will treat it as an I/O-bound or interactive process, even though in reality it could be a CPU-intensive process. The process thereby keeps a higher priority and remains in a higher-priority queue.
* Performance analysis: **3 marks** (1 mark for each scheduler)
## Bonus [10 marks]
* An explanation of why the graph for CPU-bound processes shows more time spent in queues 3/4. (2 marks)
* An explanation of why the graph for I/O-bound processes shows more time spent in queues 0/1/2. (2 marks)
* Corresponding graphs for the same. (6 marks)