# ECS150 Project2
## Members
- Daisuke Horiuchi (919535721)
- Ken Taniguchi (919537457)
## Summary
The goal of our project is to create a user-level thread library
named `uthread`, packaged as a static library. With `uthread`, a process
can create multiple threads and manage them through an API.
When multiple threads share data, a lock is applied to ensure data integrity,
timer interrupts prevent one thread from running for too long,
and threads are switched with round-robin scheduling.
`uthread` is implemented entirely with user-level calls.
## queue API
The queue API provides user-level functions
for performing operations on queues.
Our queue is a First In First Out (FIFO) structure, which means that
data is retrieved in order from oldest to newest.
To achieve `O(1)` operations, we implemented the queue as a linked list:
each element is stored in a node, and the nodes are chained together
by pointers.
#### The queue struct has the following elements:
- head: address to the oldest element
- tail: address to the latest element
- length: length of the queue
#### The following six functions are provided as the queue API:
- `queue_create`
- `queue_destroy`
- `queue_dequeue`
- `queue_delete`
- `queue_iterate`
- `queue_length`
### queue_create
Creates an empty queue with a length of 0.
### queue_destroy
Deletes a queue and releases its memory.
### queue_dequeue
Removes the oldest element from the queue and moves `head` to the next element.
### queue_delete
Given a piece of data, deletes the matching element from the queue;
if it appears more than once, the oldest occurrence is deleted.
The node is unlinked by pointing its predecessor at the next node,
and the deleted node's memory is released.
### queue_iterate
Applies the given function to each element of the queue;
the function may modify each element in place.
### queue_length
Returns the number of elements in the queue.
## uthread API
The uthread API provides thread-related functions to the user-level.
A thread is an execution flow that runs concurrently within a process.
This project implements concurrency, not parallelism: at any moment
there is exactly one thread consuming CPU time.
### Each thread is in one of the following states:
| State | Description |
| -------- | -------- |
| running | a thread that is currently executing |
| ready | a thread that is waiting to be executed |
| blocked | a thread that is blocked by a semaphore |
| zombie | a thread that has completed execution |
Each thread stores its data in the `uthread_tcb` struct, which holds its
state, context, and stack. Threads are managed through the ready queue,
the blocked queue, and the current-thread pointer shared by the process.
#### The uthread API provides the following four functions to the user-level:
- `uthread_run`
- `uthread_create`
- `uthread_yield`
- `uthread_exit`
### uthread_run
When `uthread_run` is called, the calling process becomes the initial
thread, and additional threads can be created from it.
For the given function, `uthread_create()` creates a thread.
As long as the ready queue is non-empty or there is a current thread,
`uthread_yield()` is called so the CPU is allocated to each thread in turn.
### uthread_create
Creates a new thread for the given function and puts it into the ready queue.
### uthread_yield
If there are waiting threads, takes the oldest one from the ready queue.
The currently running thread is stopped and placed back in the ready queue,
and the acquired thread is executed.
### uthread_exit
Moves the currently executing thread to the zombie state and releases
its memory. Then the next thread to be executed is retrieved
from the ready queue and executed.
### Helper functions (the following three functions are not part of the uthread API)
#### uthread_current
Returns the currently executing thread.
#### uthread_block
Blocks the currently executing thread: moves it to the blocked queue,
stops its execution, and executes the next thread from the ready queue.
#### uthread_unblock
Unblocks a blocked thread: removes the given thread
from the blocked queue and moves it to the ready queue.
## semaphore API
The semaphore API provides user-level functions for exclusion control.
When a resource is accessed by multiple threads at the same time,
its state can become inconsistent.
Therefore, it is necessary to allow access to only one thread at a time.
This is called exclusion control. A semaphore realizes it by
decreasing its value each time a resource is taken and increasing it
when the resource is released.
#### A semaphore, defined by the `semaphore` struct, contains the following:
- `lock`: a spinlock to exclusively control the value of the semaphore
- `internal_value`: the value of the semaphore (always >= 0)
- `waiting queue`: a queue of threads blocked by the semaphore
Since this project assumes concurrency with multiple threads rather than
true parallelism, a spinlock is not strictly necessary,
but we implemented one for extensibility.
#### The semaphore API provides four functions to the user-level:
- `sem_create`
- `sem_destroy`
- `sem_down`
- `sem_up`
### sem_create()
A semaphore is created by `sem_create()`. Only as many threads as the given
initial value can access the resources managed by this semaphore
at the same time.
### sem_destroy()
After the semaphore is no longer needed, `sem_destroy()` deletes it and
releases its memory.
### sem_down()
When accessing a resource managed by a semaphore, `sem_down()` is called.
It checks whether `internal_value` is greater than 0, that is,
whether the thread may access the resource. If so, `internal_value` is
decreased by one to record that the resource is in use, and the thread
proceeds to access it.
If `internal_value` is 0, the thread cannot access the resource
at this moment: it is stored in the `waiting_queue`
and moved to the blocked state.
### sem_up()
Releases access to a resource acquired with `sem_down()`.
To tell the semaphore that access has been released,
`internal_value` is increased by one.
Then the oldest blocked thread stored in `waiting_queue` is retrieved
and its state changed from blocked to ready.
## preemption
Preemption provides the following four functions to the user-level:
- `preempt_disable`
- `preempt_enable`
- `preempt_start`
- `preempt_stop`
### preempt_disable
Starts a critical section, preventing other threads from being scheduled
by a timer interrupt. This function is used for operations that
must not be interrupted or interfered with under any circumstances.
### preempt_enable
Ends a critical section. When this function is called,
the preemption signal is re-enabled and timer interrupts are delivered again.
### preempt_start
Generates 100 signals per second; each signal's handler calls
`uthread_yield()` to prevent one thread from using too much CPU time.
### preempt_stop
Stops the signals set up by `preempt_start()`.
### Testing Preempt
Our `test_preempt.c` consumes CPU execution time
during the execution of `thread1`. The purpose of this test case is to check
whether one thread is correctly switched to another thread
by round-robin scheduling.
The difficulty in writing this test case was that
`sleep()` does not consume CPU execution time, so the virtual timer
would never fire. Therefore, we used `clock_t` to consume CPU execution time.