# Web Server Express in Java
~~[GitHub](https://github.com/NeoJay0705/JExpress.git)~~
# Protocol
A protocol can be broken down into four parts:
1. Define the format of the content to be transferred
2. Choose a transport layer
    - TCP
    - UDP
3. Parse the content from the input stream of the socket
4. Send the request/response through the output stream of the socket
# HTTP
The HTTP protocol uses TCP as its transport layer. An HTTP message consists of four parts:
1. Start line (a request line or a status line)
2. Headers
3. Empty line
4. Payload
### Start Line
```
// Request
GET / HTTP/1.1
// Response
HTTP/1.1 200 OK
```
### Headers
```
// Request
Host: www.google.com
Content-Type: text/plain
Content-Length: 100
// Response
Content-Type: text/plain
Content-Length: 999
```
### Empty Line
The empty line marks the end of the headers, so the receiver knows where the payload starts.
### Payload
The payload can carry arbitrary binary data, which is parsed according to the format declared in the `Content-Type` header.
# Web Server
A web server performs a sequence of steps, such as (a minimal sketch follows the list):
1. Listen on a port
2. Read I/O stream
3. Call the corresponding handler function
4. Write I/O stream
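As a reference point, here is a minimal single-threaded sketch of those four steps; the `handle` method is a hypothetical placeholder for parsing the request and building a response:
```java=
import java.io.InputStream;
import java.io.OutputStream;
import java.net.ServerSocket;
import java.net.Socket;

public class BlockingServer {
    public static void main(String[] args) throws Exception {
        try (ServerSocket server = new ServerSocket(8080)) {        // 1. listen on a port
            while (true) {
                try (Socket socket = server.accept()) {
                    InputStream in = socket.getInputStream();       // 2. read the I/O stream
                    OutputStream out = socket.getOutputStream();
                    byte[] response = handle(in);                   // 3. call the corresponding function
                    out.write(response);                            // 4. write the I/O stream
                    out.flush();
                }
            }
        }
    }

    // hypothetical placeholder: parse the request and build a response
    static byte[] handle(InputStream in) {
        return "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK".getBytes();
    }
}
```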
For the non-blocking requirement, the server shouldn't have to wait for the previous request to finish before accepting a new one. One solution is to assign a new thread to each request. But what if the number of requests is really large? Because the OS scheduler time-slices, as the number of threads increases, the execution time each thread gets per slice decreases. In addition, each context switch has a roughly constant cost, so the percentage of time spent on switching grows as the execution slices shrink.
To solve this problem, you can put more servers behind a load balancer to keep the ratio of threads to CPUs reasonable, at the cost of more hardware. Or you can implement a thread pool with a constant number of worker threads to keep the CPUs busy on useful work, without the cost of context switching and I/O blocking.
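A sketch of the thread-pool variant, assuming a fixed pool sized to the CPU count; the canned response stands in for real request handling:
```java=
import java.net.ServerSocket;
import java.net.Socket;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class PooledServer {
    public static void main(String[] args) throws Exception {
        // a fixed number of worker threads, typically sized to the CPU count
        ExecutorService pool = Executors.newFixedThreadPool(
                Runtime.getRuntime().availableProcessors());
        try (ServerSocket server = new ServerSocket(8080)) {
            while (true) {
                Socket socket = server.accept();
                // hand the connection to the pool so accept() can continue immediately
                pool.submit(() -> {
                    try (Socket s = socket) {
                        s.getOutputStream().write(
                                "HTTP/1.1 200 OK\r\nContent-Length: 2\r\n\r\nOK".getBytes());
                    } catch (Exception e) {
                        // log and drop the connection
                    }
                });
            }
        }
    }
}
```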
# Flow
```
listen <------
  |          |
  V          |
new thread ---
  |
  V
parse content
  |
  ----------------
  |              |
  V              V
requestObj   responseObj
```
In HTTP, the `Content-Length` header is required whenever a request or response carries a payload. It tells the receiver how many bytes of payload to read before moving on to the next step; without it, reading would block until the socket is closed.
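A hedged sketch of the "parse content" step under that rule: read the start line and headers byte by byte until the empty line, so the payload can afterwards be read as exactly `Content-Length` bytes. The class and helper names are illustrative:
```java=
import java.io.IOException;
import java.io.InputStream;
import java.util.LinkedHashMap;
import java.util.Map;

public class HeaderParser {
    // read the start line and headers, stopping at the empty line;
    // assumes well-formed "Name: value" header lines
    static Map<String, String> readHeaders(InputStream in) throws IOException {
        Map<String, String> headers = new LinkedHashMap<>();
        readLine(in); // start line, e.g. "GET / HTTP/1.1" (parsed separately)
        for (String line = readLine(in); !line.isEmpty(); line = readLine(in)) {
            int colon = line.indexOf(':');
            headers.put(line.substring(0, colon).trim(),
                        line.substring(colon + 1).trim());
        }
        return headers; // Content-Length now tells us how much payload follows
    }

    // read raw bytes up to "\r\n" so no buffering consumes part of the payload
    static String readLine(InputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        for (int b = in.read(); b != -1 && b != '\n'; b = in.read()) {
            if (b != '\r') {
                sb.append((char) b);
            }
        }
        return sb.toString();
    }
}
```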
# Framework
```java=
handler = (req, res) -> { /* serve the request */ };
server.listen(port, handler);
```
## `(req, res)`
Figuring out what belongs in the request and response objects is genuinely difficult: you need a lot of context about real use cases to meet developers' requirements.
Thanks to the pioneers of the servlet framework, we can reuse the `HttpServletRequest` and `HttpServletResponse` interfaces, which have been battle-tested for a long time. When you read the spec of those interfaces, you will notice that many of their ideas, such as authorization, are still used in modern web frameworks.
Because we are not going to implement a full servlet container, we implement those interfaces ourselves.
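As a hedged illustration, here is what a handler can look like when written against those interfaces, assuming the classic `javax.servlet` API is on the classpath; `HelloHandler` is just an example name:
```java=
import java.io.IOException;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative handler written against the servlet interfaces only.
public class HelloHandler {
    public void serve(HttpServletRequest req, HttpServletResponse res) {
        try {
            res.setStatus(200);
            res.setHeader("Content-Type", "text/plain");
            res.getWriter().print("Hello from " + req.getMethod() + " " + req.getRequestURI());
        } catch (IOException e) {
            // how errors are surfaced is left to the framework
        }
    }
}
```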
# Socket I/O
To parse the protocol, we read the content from the socket's input stream. The empty line makes it easy to find the end of the headers, but when do we stop reading the payload? The socket's input stream won't report end of stream (`read()` returning `-1`) until the connection is closed.
The answer is to put the size of the payload in the `Content-Length` header and implement a new input stream that wraps the socket's input stream.
```java=
// Expose only the payload: once Content-Length bytes have been read,
// report end of stream instead of blocking on the socket.
public int read() throws IOException {
    if (contentLength > 0) {
        contentLength--;
        return socketInputStream.read();
    }
    return -1;
}
```
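For context, a minimal sketch of the class around that `read()` method, assuming a hypothetical name `FixedLengthInputStream`:
```java=
import java.io.IOException;
import java.io.InputStream;

// Hypothetical wrapper: limits reads to the declared Content-Length.
public class FixedLengthInputStream extends InputStream {
    private final InputStream socketInputStream;
    private long contentLength;

    public FixedLengthInputStream(InputStream socketInputStream, long contentLength) {
        this.socketInputStream = socketInputStream;
        this.contentLength = contentLength;
    }

    @Override
    public int read() throws IOException {
        if (contentLength > 0) {
            contentLength--;
            return socketInputStream.read();
        }
        return -1; // payload fully consumed
    }
}
```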
# Design Pattern
For the remaining work, we can shape the development flow with a few design patterns.
## Handler
```java=
public interface RequestHandler {
    void serve(HttpServletRequest req, HttpServletResponse res);
}
```
A single handler that serves only one kind of request doesn't satisfy our needs. We need a dispatcher that stores many handlers and invokes the right one based on the content of the request, for example `GET /` in HTTP.
An interesting thing is that the dispatcher is itself a handler. Why?
```java=
public class Dispatcher implements RequestHandler {
    private Map<ResourceWithMethod, RequestHandler> handlerManager;

    public void serve(HttpServletRequest req, HttpServletResponse res) {
        String method = req.getMethod();
        String resource = req.getRequestURI();
        // look up the handler registered for this resource/method pair
        handlerManager.get(new ResourceWithMethod(resource, method)).serve(req, res);
    }
}
```
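Because the dispatcher is a handler, it can be passed straight to `server.listen`. A hedged usage sketch, assuming a hypothetical `addHandler` registration method and a `ResourceWithMethod(resource, method)` constructor:
```java=
Dispatcher dispatcher = new Dispatcher();
// hypothetical registration method: map "GET /" to a handler
dispatcher.addHandler(new ResourceWithMethod("/", "GET"), (req, res) -> {
    res.setStatus(200);
});
// the dispatcher itself satisfies the (req, res) handler contract
server.listen(port, dispatcher);
```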
## Layer
So far, with the dispatcher we can add many handlers matched by method and resource, which scales horizontally. But what about scaling vertically?
A layer can be connected to another layer and decide whether to invoke the next one. To keep the original use case, `server.listen(port, layer)` has to fit the same shape as `server.listen(port, (req, res) -> {})`, so a layer extends `RequestHandler` as well, plus extra methods to link layers together.
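A hedged sketch of such a layer, assuming an abstract `Layer` base class; the `add` method that links layers is revisited in the Layer Builder section:
```java=
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

// Illustrative base class: a layer is a RequestHandler that may delegate
// to the next layer in the chain.
public abstract class Layer implements RequestHandler {
    protected Layer next;

    // link another layer after this one (revisited in the Layer Builder section)
    public Layer add(Layer next) {
        this.next = next;
        return next;
    }

    // subclasses call this when the request should continue down the chain
    protected void callNext(HttpServletRequest req, HttpServletResponse res) {
        if (next != null) {
            next.serve(req, res);
        }
    }
}
```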
---
Now you can add an authentication layer that checks whether a request is authenticated (if yes, it goes to the next layer, otherwise it stops and runs the error path); an authorization layer that checks whether the request may access the resource; an exception layer that handles the same kinds of exceptions in one place instead of in every handler; and whatever other layer you want (a sketch follows). By the way, the dispatcher is actually a layer too.
---
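For example, a hedged sketch of an authentication layer built on the `Layer` base class sketched above; `isAuthenticated` is a hypothetical check:
```java=
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

public class AuthenticationLayer extends Layer {
    @Override
    public void serve(HttpServletRequest req, HttpServletResponse res) {
        if (isAuthenticated(req)) {
            callNext(req, res);     // authenticated: hand over to the next layer
        } else {
            res.setStatus(401);     // stop here and run the error path
        }
    }

    // hypothetical check, e.g. validate an Authorization header or a session
    private boolean isAuthenticated(HttpServletRequest req) {
        return req.getHeader("Authorization") != null;
    }
}
```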
## Layer Builder
The usage scenario we expect looks like this:
```java=
Layer entry = new ALayer()
        .add(new BLayer())
        .add(new CLayer());
```
```java=
public Layer add(Layer next) {
    this.next = next;
    return next;
}
```
What's the problem with the above implementation?
- Ans: `entry` will always end up being the last layer, `CLayer` in this example, because each `.add()` returns its argument rather than the first layer, `ALayer`.
To solve the problem, we can implement a builder that keeps track of the first layer (to pass to `server.listen(port, layer)`) and the last layer (to append to with `.add()`).
```
// First, .add(ALayer) for initial adding
head ---
       |
       v
     ALayer
       ^
       |
last ---

// Second, .add(BLayer)
head ---
       |
       v
     ALayer ---> BLayer
                   ^
                   |
last ---------------
```
```java=
public class LayerBuilder {
    private Layer head;
    private Layer last;

    public LayerBuilder add(Layer layer) {
        if (head == null) {
            head = layer;               // remember the first layer for build()
            last = layer;
        } else {
            last = last.add(layer);     // Layer.add() returns its argument
        }
        return this;                    // the builder chains, not the layers
    }

    public Layer build() {
        return head;                    // the entry point for server.listen()
    }
}
```
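Finally, a hedged usage sketch that puts the pieces together, assuming the `listen` API from the Framework section; `ExceptionLayer` and `DispatcherLayer` are hypothetical, the latter being an adapter that exposes the dispatcher as a layer, as noted earlier:
```java=
Layer entry = new LayerBuilder()
        .add(new ExceptionLayer())          // hypothetical
        .add(new AuthenticationLayer())     // from the sketch above
        .add(new DispatcherLayer())         // the dispatcher exposed as a layer
        .build();
server.listen(port, entry);                 // entry is the first layer, as intended
```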
###### tags: `System Programming`