# LTM On Tap

# Makefile

- The *targets* are file names, separated by spaces. Typically, there is only one per rule.
- The *commands* are a series of steps typically used to make the target(s). These *need to start with a tab character*, not spaces.
- The *prerequisites* are also file names, separated by spaces. These files need to exist before the commands for the target are run. They are also called *dependencies*.
- A rule generally looks like this:

```makefile
targets: prerequisites
	command
	command
	command
```

- `$@` is an [automatic variable](https://makefiletutorial.com/#automatic-variables) that contains the target name
- The important variables used by implicit rules are:
    - `CC`: Program for compiling C programs; default `cc`
    - `CXX`: Program for compiling C++ programs; default `g++`
    - `CFLAGS`: Extra flags to give to the C compiler
    - `CXXFLAGS`: Extra flags to give to the C++ compiler
    - `CPPFLAGS`: Extra flags to give to the C preprocessor
    - `LDFLAGS`: Extra flags to give to compilers when they are supposed to invoke the linker

# Network Overview

- ISO/OSI model:
    - Application: HTTP, SMTP, POP3, FTP
    - Transport: TCP, UDP
    - Network: IP, ICMP
    - Datalink: ARP, PPP, MAC
    - Physical
- IPv6
    - 8 *segments*, each a hexadecimal value between 0 and FFFF
    - Valid IPv6 addresses:
        - `2001:db8:3333:4444:5555:6666:7777:8888`
        - `2001:db8:3333:4444:CCCC:DDDD:EEEE:FFFF`
        - `::` (implies all 8 segments are zero)
        - `2001:db8::` (implies that the last six segments are zero)
        - `::1234:5678` (implies that the first six segments are zero)
        - `2001:db8::1234:5678` (implies that the middle four segments are zero)
        - `2001:0db8:0001:0000:0000:0ab9:C0A8:0102` (this can be compressed by eliminating leading zeros and collapsing the consecutive zero segments, as follows: `2001:db8:1::ab9:C0A8:102`)

# Socket

- The tools used for IPC can be divided into three main categories:
    - **Communication**: used to exchange data between processes.
    - **Synchronization**: used to synchronize the operation of processes.
    - **Signal**: although signals were created for a different purpose, they can still be used as a synchronization tool in some situations. More rarely, a signal can be used as a communication tool: the signal number itself is treated as a piece of information.

![](https://i.imgur.com/agYU00d.png)

- **Pipe:** A pipe is a unidirectional data channel. Two pipes can be used to create a two-way data channel between two processes. This uses standard input and output methods. Pipes are used in all POSIX systems as well as Windows operating systems. (See the sketch after this list.)
- **Socket:** The socket is the endpoint for sending or receiving data in a network. This is true for data sent between processes on the same computer or data sent between different computers on the same network. Most operating systems use sockets for interprocess communication.
- **File:** A file is a data record that may be stored on a disk or acquired on demand by a file server. Multiple processes can access a file as required. All operating systems use files for data storage.
- **Signal:** Signals are useful in interprocess communication in a limited way. They are system messages that are sent from one process to another. Normally, signals are not used to transfer data but are used for remote commands between processes.
- **Shared Memory:** Shared memory is memory that can be simultaneously accessed by multiple processes. This is done so that the processes can communicate with each other. All POSIX systems, as well as Windows operating systems, use shared memory.
- **Message Queue:** Multiple processes can read and write data to the message queue without being connected to each other. Messages are stored in the queue until their recipient retrieves them. Message queues are quite useful for interprocess communication and are used by most operating systems.
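A minimal sketch of the simplest of these mechanisms, a pipe between a parent and a child process (the message contents are made up for the example):

```c
/* Minimal sketch: one-way parent-to-child communication over a pipe */
#include <stdio.h>
#include <unistd.h>

int main(void) {
    int fds[2];                      /* fds[0] = read end, fds[1] = write end */
    if (pipe(fds) < 0) return 1;

    if (fork() == 0) {               /* child: reads */
        close(fds[1]);
        char buf[64];
        ssize_t n = read(fds[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child got: %s\n", buf); }
        return 0;
    }
    close(fds[0]);                   /* parent: writes */
    write(fds[1], "hello", 5);
    close(fds[1]);
    return 0;
}
```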
- **Unix domain socket**:
    - A data communications endpoint for exchanging data between processes executing **on the same host** operating system
    - Uses the file system as its address name space
    - Processes reference Unix domain sockets as file system inodes, so two processes can communicate by **opening the same socket**.

![](https://i.imgur.com/JjmKK41.png)

- recv = read (flags = 0)
- send = write (flags = 0)

![](https://i.imgur.com/ZG6RJJR.png)

# File descriptor

- stdin: 0, stdout: 1, and stderr: 2
- A file descriptor (FD) is a small non-negative integer that identifies an open file within a process while using input/output resources like network sockets or pipes

# Frame parse

1. Fixed-size fields
    - Fixed sizes make text-string representations harder
2. Use appropriate delimiters (`@`, `-`, ...)
    - With delimiters, message boundaries and `recv` boundaries do not line up:
        - Case 1: the sender `send`s `ab`, `c`, `@def@`; the receiver may need several `recv` calls before it has the full string `abc` to process, and only then can it handle `def@`
        - Case 2: the sender `send`s twice, but a single `recv` returns all the data at once
3. Attach a length to the message (length-prefix framing; see the sketch below)
4. After each `send`, the receiving client `recv`s the data and `send`s a signal back to the sender to mark the end of that piece of data (not optimal)
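A minimal sketch of option 3, length-prefix framing: the receiver first reads a 4-byte length, then loops until the whole payload has arrived. The helper names are made up:

```c
/* Minimal sketch: receiving one length-prefixed frame (4-byte big-endian length) */
#include <arpa/inet.h>
#include <stdint.h>
#include <sys/socket.h>

/* keep calling recv until exactly `len` bytes have arrived (or the peer closes) */
static int recv_all(int fd, void *buf, size_t len) {
    char *p = buf;
    while (len > 0) {
        ssize_t n = recv(fd, p, len, 0);
        if (n <= 0) return -1;       /* error or connection closed */
        p += n;
        len -= n;
    }
    return 0;
}

int recv_frame(int fd, char *payload, size_t max) {
    uint32_t netlen;
    if (recv_all(fd, &netlen, sizeof(netlen)) < 0) return -1;
    uint32_t len = ntohl(netlen);
    if (len > max) return -1;        /* frame too large for the caller's buffer */
    if (recv_all(fd, payload, len) < 0) return -1;
    return (int)len;
}
```

This sidesteps both cases above: no matter how `send` calls are split or coalesced on the wire, the receiver always reassembles exactly one frame.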
# Signal

**Date:** April 26, 2022

- Signals are various notifications sent to a process in order to notify it of various "important" events
- Each signal may have a signal handler, which is a function that gets called when the process receives that signal. The function is called in "**asynchronous mode**", meaning that nowhere in your program is there code that calls this function directly. Instead, when the signal is sent to the process, the operating system stops the execution of the process and "forces" it to call the signal handler function. When that signal handler function returns, the process continues execution from wherever it happened to be before the signal was received, as if this interruption never occurred.
- A signal is basically a one-way notification.
- A signal can be sent by the kernel to a process, by a process to another process, or by a process to itself

![](https://i.imgur.com/3zqpq0H.png)

# I/O model

**Date:** April 26, 2022

- Blocking I/O
- Non-blocking I/O
- I/O multiplexing (select() and poll())
- Signal-driven I/O (SIGIO signal)
- Asynchronous I/O (aio_ functions)
- Two phases for an input operation:
    - waiting for the data to be ready in the kernel
    - copying data from the kernel to the process

**Blocking I/O Model**

Figure 6.1. Blocking I/O model.

![](https://i.imgur.com/b0yahPV.gif)

We use UDP for this example instead of TCP because with UDP, the concept of data being "ready" to read is simple: either an entire datagram has been received or it has not. With TCP it gets more complicated, as additional variables such as the socket's low-water mark come into play.

In the examples in this section, we also refer to recvfrom as a system call because we are differentiating between our application and the kernel. Regardless of how recvfrom is implemented (as a system call on a Berkeley-derived kernel or as a function that invokes the getmsg system call on a System V kernel), there is normally a switch from running in the application to running in the kernel, followed at some time later by a return to the application.

In Figure 6.1, the process calls recvfrom and the system call does not return until the datagram arrives and is copied into our application buffer, or an error occurs. The most common error is the system call being interrupted by a signal. We say that our process is blocked the entire time from when it calls recvfrom until it returns. When recvfrom returns successfully, our application processes the datagram.

**Non-blocking I/O Model**

When we set a socket to be nonblocking, we are telling the kernel: "when an I/O operation that I request cannot be completed without putting the process to sleep, do not put the process to sleep, but return an error instead."

Figure 6.2. Nonblocking I/O model.

![](https://i.imgur.com/2pBy3tC.gif)

The first three times that we call recvfrom, there is no data to return, so the kernel immediately returns an error of EWOULDBLOCK instead. The fourth time we call recvfrom, a datagram is ready, it is copied into our application buffer, and recvfrom returns successfully. We then process the data.

When an application sits in a loop calling recvfrom on a nonblocking descriptor like this, it is called polling. The application is continually polling the kernel to see if some operation is ready. This is often a waste of CPU time, but this model is occasionally encountered, normally on systems dedicated to one function.

**I/O multiplexing Model**

With I/O multiplexing, we call select or poll and block in one of these two system calls, instead of blocking in the actual I/O system call.

Figure 6.3. I/O multiplexing model.

![](https://i.imgur.com/zIN7PWt.gif)

We block in a call to select, waiting for the datagram socket to be readable. When select returns that the socket is readable, we then call recvfrom to copy the datagram into our application buffer.

Comparing Figure 6.3 to Figure 6.1, there does not appear to be any advantage, and in fact, there is a slight disadvantage because using select requires two system calls instead of one. But the advantage in using select, which we will see later in this chapter, is that we can wait for more than one descriptor to be ready.

- Another closely related I/O model is to use multithreading with blocking I/O. That model very closely resembles the model described above, except that instead of using select to block on multiple file descriptors, the program uses multiple threads (one per file descriptor), and each thread is then free to call blocking system calls like recvfrom.

**Signal-Driven I/O Model**

Figure 6.4. Signal-Driven I/O model.

![](https://i.imgur.com/3KLXx0K.gif)

We first enable the socket for signal-driven I/O and install a signal handler using the sigaction system call. The return from this system call is immediate and our process continues; it is not blocked. When the datagram is ready to be read, the SIGIO signal is generated for our process. We can either read the datagram from the signal handler by calling recvfrom and then notify the main loop that the data is ready to be processed, or we can notify the main loop and let it read the datagram.

Regardless of how we handle the signal, the advantage to this model is that we are not blocked while waiting for the datagram to arrive. The main loop can continue executing and just wait to be notified by the signal handler that either the data is ready to process or the datagram is ready to be read.
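A minimal sketch of that setup, assuming the Linux-style `F_SETOWN`/`O_ASYNC` interface (the handler and function names are made up):

```c
/* Minimal sketch: enabling signal-driven I/O on a socket (assumes Linux O_ASYNC) */
#include <fcntl.h>
#include <signal.h>
#include <string.h>
#include <unistd.h>

static volatile sig_atomic_t data_ready = 0;

static void on_sigio(int signo) { (void)signo; data_ready = 1; }

int enable_sigio(int sockfd) {
    struct sigaction sa;
    memset(&sa, 0, sizeof(sa));
    sa.sa_handler = on_sigio;
    sigemptyset(&sa.sa_mask);
    if (sigaction(SIGIO, &sa, NULL) < 0) return -1;       /* step 1: install the handler */
    if (fcntl(sockfd, F_SETOWN, getpid()) < 0) return -1; /* deliver SIGIO to this process */
    int flags = fcntl(sockfd, F_GETFL, 0);
    return fcntl(sockfd, F_SETFL, flags | O_ASYNC);       /* step 2: enable signal-driven I/O */
}
```

The main loop can then keep working and check `data_ready` (or be interrupted) instead of blocking in recvfrom.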
**Asynchronous I/O Model**

Figure 6.5. Asynchronous I/O model.

![](https://i.imgur.com/QqtVkPU.gif)

We call aio_read (the POSIX asynchronous I/O functions begin with aio_ or lio_) and pass the kernel the descriptor, buffer pointer, buffer size (the same three arguments as for read), file offset (similar to lseek), and how to notify us when the entire operation is complete. This system call returns immediately and our process is not blocked while waiting for the I/O to complete. We assume in this example that we ask the kernel to generate some signal when the operation is complete. This signal is not generated until the data has been copied into our application buffer, which is different from the signal-driven I/O model.

As of this writing, few systems support POSIX asynchronous I/O. We are not certain, for example, if systems will support it for sockets. Our use of it here is as an example to compare against the signal-driven I/O model.

**Comparison of the I/O Models**

![](https://i.imgur.com/VuYZhfP.gif)

- A *synchronous I/O operation* causes the requesting process to be blocked until that I/O operation completes
    - Blocking
    - Non-blocking
    - I/O multiplexing: `select`, `poll`, `epoll` (block)
    - Signal-driven I/O
- An *asynchronous I/O operation* does not cause the requesting process to be blocked
    - Asynchronous I/O
- `select`:
    - Can monitor at most 1024 fds at a time
    - It only reports that the monitored set contains readable sockets, not which ones → you must iterate over every fd and test whether it is readable
    - You must pass maxfd + 1, so fds 0, 1, 2 are always covered as well → wasteful
    - Passing maxfd + 1 (an array of 4 fds: stdin 0, stdout 1, stderr 2, with the next fd at index 3) makes the kernel scan four slots even though **only 1 fd is actually being monitored**
- `poll`:
    - No limit on the number of fds to monitor
    - You can **pass in the exact list of fds to monitor** instead of maxfd → avoids that waste
    - Like select, you still have to iterate over every fd to check whether it is readable
- `epoll`:
    - Like `poll`, but it **returns the number of fds that are readable** (a single number)
    - You only loop over the ready fds → faster
- shutdown function
    - close():
        - decrements the reference count of a socket and closes it only when the count reaches zero
        - terminates both directions of data transfer
    - shutdown(): (more common)
        - closes a socket immediately, without looking at the reference count
        - can close only the read-half or the write-half of a connection (stop reading but keep sending, or stop sending but keep reading)

# Raw socket

- Allows you to bypass the TCP/UDP layers
- To capture traffic at the transport layer, you must open a raw socket (a layer-crossing socket) at the data link layer
- Send/receive your own packets with your own headers
- You need to do all protocol processing at user level (parse the headers and extract the data yourself)
- TCP/UDP never reach raw sockets
    - The kernel IP stack handles these
    - The Linux implementation is an exception
- All ICMP is delivered, except:
    - ICMP echo request
    - Timestamp request
    - Mask request
- All IGMP
- All other protocols that the kernel doesn't understand: OSPF

# Traceroute

- Send 3 UDP packets with TTL = 1
    - At the first router, TTL reaches 0 → the router sends back an ICMP error packet
- Send 3 UDP packets with TTL = 2
    - At the second router, TTL reaches 0 → the router sends back an ICMP error packet
- Windows sends ICMP packets instead of UDP

# Datalink socket

- 3 common methods to access the datalink layer under Unix:
    - BSD Packet Filter (BPF)

        ![](https://i.imgur.com/Lzucvx2.gif)

    - SVR4 Datalink Provider Interface (DLPI)

        ![](https://i.imgur.com/RejO5Wq.gif)

    - Linux SOCK_PACKET interface

```c
fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL)); /* newer systems */
/* or */
fd = socket(AF_INET, SOCK_PACKET, htons(ETH_P_ALL)); /* older systems */
```
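Building on the newer-systems call above, a minimal sketch that reads one raw frame from such a socket (Linux-specific; requires root or CAP_NET_RAW):

```c
/* Minimal sketch: reading one raw Ethernet frame from a PF_PACKET socket */
#include <arpa/inet.h>
#include <linux/if_ether.h>
#include <stdio.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(PF_PACKET, SOCK_RAW, htons(ETH_P_ALL));
    if (fd < 0) { perror("socket"); return 1; }

    unsigned char frame[ETH_FRAME_LEN];
    /* each recvfrom returns one whole frame, link-layer headers included */
    ssize_t n = recvfrom(fd, frame, sizeof(frame), 0, NULL, NULL);
    if (n > 0)
        printf("got %zd bytes; dst MAC %02x:%02x:%02x:%02x:%02x:%02x\n",
               n, frame[0], frame[1], frame[2], frame[3], frame[4], frame[5]);
    return 0;
}
```

As the notes say, all parsing past this point (Ethernet, IP, TCP/UDP headers) is the application's job.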
# Broadcast

| Type | IPv4 | IPv6 | TCP | UDP | IP interfaces identified | IP interfaces delivered to |
| --- | --- | --- | --- | --- | --- | --- |
| Unicast | ✅ | ✅ | ✅ | ✅ | One | One |
| Anycast | Optional, but not implemented | ✅ | Not yet | ✅ | A set | One in set |
| Multicast | Optional | ✅ | ❌ | ✅ | A set | All in set |
| Broadcast | ✅ | ❌ | ❌ | ✅ | All | All |

- Broadcasting and multicasting require a datagram transport such as UDP or raw IP
- Used by ARP, DHCP, NTP (Network Time Protocol), routing daemons
- When a broadcast packet reaches a router, it **is dropped**

**Unicast versus Broadcast**

**Unicast example of a UDP datagram**

![](https://i.imgur.com/XHHwgGw.png)

- The packet travels from the application down through the datalink layer and onto the wire
- On the receiving host, the datalink layer first checks whether the destination MAC address equals `00:...:bc:b4` before passing the packet up to IPv4 → IPv4 checks whether the destination IP equals `192.168.42.3` before passing it up to UDP → UDP checks the port, and the data finally reaches the application

**Example of a broadcast UDP datagram**

![](https://i.imgur.com/dIR8RbJ.png)

- The packet travels from the application down through the datalink layer and onto the wire
- Both B and C receive the packet
- At B: the destination MAC `ff:..:ff` lets the frame through the datalink layer → the broadcast IP matches the system, so the packet is pushed up to UDP → no application is using port 520 → the datagram is discarded → **wasted performance**, which is why IPv6 dropped broadcast

# Multicast

- IPv4 broadcasting must be recoded as multicasting in IPv6
- A multicast datagram is only received by those hosts interested in such receipt → it can already be dropped at the datalink layer

| One-to-Many (1-M) | Many-to-Many (M-M) | Many-to-One (M-1) |
| --- | --- | --- |
| Audio/Video (lectures, presentations, concerts) | Conferencing (A/V conferencing, whiteboards) | Resource Discovery (location services, device discovery) |
| Push Media/Announcements (news, weather, time) | Games (multi-player, simulators) | Data Collection (data monitoring applications, video surveillance) |
| Distribution (binary executables) | Resource Sharing (distributed databases) | Miscellaneous (auctions, polling) |
| Monitoring (stock prices) | Distributed OS (concurrent collaboration, distance learning) | |

**IPv4 Class D Addresses**

Class D addresses, in the range 224.0.0.0 through 239.255.255.255, are the multicast addresses in IPv4 (Figure A.3). The low-order 28 bits of a class D address form the multicast group ID and the 32-bit address is called the group address.

Figure 21.1 shows how IP multicast addresses are mapped into Ethernet multicast addresses. We also show the mapping for IPv6 multicast addresses to allow easy comparison of the resulting Ethernet addresses.

Figure 21.1. Mapping of IPv4 and IPv6 multicast addresses to Ethernet addresses.

![](https://i.imgur.com/A286DMv.gif)

Considering just the IPv4 mapping, the high-order 24 bits of the Ethernet address are always 01:00:5e. The next bit is always 0, and the low-order 23 bits are copied from the low-order 23 bits of the multicast group address. The high-order 5 bits of the group address are ignored in the mapping. This means that 32 multicast addresses map to a single Ethernet address: The mapping is not one-to-one.

The low-order 2 bits of the first byte of the Ethernet address identify the address as a universally administered group address. Universally administered means the high-order 24 bits have been assigned by the IEEE, and group addresses are recognized and handled specially by receiving interfaces.
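A minimal sketch of this mapping (the helper name is made up). Note that 224.0.1.1 and 225.0.1.1 both produce 01:00:5e:00:01:01, illustrating the 32-to-1 property:

```c
/* Minimal sketch: mapping an IPv4 multicast group to its Ethernet address
   (01:00:5e prefix + low-order 23 bits of the group address) */
#include <arpa/inet.h>
#include <stdint.h>
#include <stdio.h>

void map_mcast(const char *group, uint8_t mac[6]) {
    uint32_t addr = ntohl(inet_addr(group));
    mac[0] = 0x01; mac[1] = 0x00; mac[2] = 0x5e;
    mac[3] = (addr >> 16) & 0x7f;   /* the 24th bit is forced to 0 */
    mac[4] = (addr >> 8) & 0xff;
    mac[5] = addr & 0xff;
}

int main(void) {
    uint8_t mac[6];
    map_mcast("224.0.1.1", mac);    /* expect 01:00:5e:00:01:01 */
    printf("%02x:%02x:%02x:%02x:%02x:%02x\n",
           mac[0], mac[1], mac[2], mac[3], mac[4], mac[5]);
    return 0;
}
```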
There are a few special IPv4 multicast addresses:

- 224.0.0.1 is the `all-hosts` group. All multicast-capable nodes (hosts, routers, printers, etc.) on a subnet must join this group on all multicast-capable interfaces. (We will talk about what it means to join a multicast group shortly.)
- 224.0.0.2 is the `all-routers` group. All multicast-capable routers on a subnet must join this group on all multicast-capable interfaces.

The range 224.0.0.0 through 224.0.0.255 (which we can also write as 224.0.0.0/24) is called link local. These addresses are reserved for low-level topology discovery or maintenance protocols, and datagrams destined to any of these addresses are never forwarded by a multicast router. We will say more about the scope of various IPv4 multicast addresses after looking at IPv6 multicast addresses.

**IPv6 Multicast Addresses**

The high-order byte of an IPv6 multicast address has the value ff. Figure 21.1 shows the mapping from a 16-byte IPv6 multicast address into a 6-byte Ethernet address. The low-order 32 bits of the group address are copied into the low-order 32 bits of the Ethernet address. The high-order 2 bytes of the Ethernet address are 33:33.

The low-order two bits of the first byte of the Ethernet address specify the address as a locally administered group address. Locally administered means there is no guarantee that the address is unique to IPv6. There could be other protocol suites besides IPv6 sharing the network and using the same high-order two bytes of the Ethernet address. As we mentioned earlier, group addresses are recognized and handled specially by receiving interfaces.

Two formats are defined for IPv6 multicast addresses, as shown in Figure 21.2. When the P flag is 0, the T flag differentiates between a well-known multicast group (a value of 0) and a transient multicast group (a value of 1). A P value of 1 designates a multicast address that is assigned based on a unicast prefix (defined in RFC 3306 [Haberman and Thaler 2002]). If the P flag is 1, the T flag also must be 1 (i.e., unicast-based multicast addresses are always transient), and the plen and prefix fields are set to the prefix length and value of the unicast prefix, respectively. The upper two bits of this field are reserved. IPv6 multicast addresses also have a 4-bit scope field that we will discuss shortly. RFC 3307 [Haberman 2002] describes the allocation mechanism for the low-order 32 bits of an IPv6 group address (the group ID), independent of the setting of the P flag.

Figure 21.2. Format of IPv6 multicast addresses

![](https://i.imgur.com/3ZffI3F.gif)

There are a few special IPv6 multicast addresses:

- ff01::1 and ff02::1 are the all-nodes groups at interface-local and link-local scope. All nodes (hosts, routers, printers, etc.) on a subnet must join these groups on all multicast-capable interfaces. This is similar to the IPv4 224.0.0.1 multicast address. However, since multicast is an integral part of IPv6, unlike IPv4, this is not optional. Although the IPv4 group is called the all-hosts group and the IPv6 group is called the all-nodes group, they serve the same purpose. The group was renamed in IPv6 to make it clear that it is intended to address routers, printers, and any other IP devices on the subnet as well as hosts.
- ff01::2, ff02::2, and ff05::2 are the all-routers groups at interface-local, link-local, and site-local scopes. All routers on a subnet must join these groups on all multicast-capable interfaces. This is similar to the IPv4 224.0.0.2 multicast address.
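For completeness, a minimal sketch of joining an IPv6 multicast group with the POSIX `IPV6_JOIN_GROUP` socket option (the group and interface name are example values):

```c
/* Minimal sketch: joining an IPv6 multicast group on a given interface */
#include <arpa/inet.h>
#include <net/if.h>
#include <netinet/in.h>
#include <string.h>
#include <sys/socket.h>

int join6(int fd, const char *group, const char *ifname) {
    struct ipv6_mreq mreq;
    memset(&mreq, 0, sizeof(mreq));
    inet_pton(AF_INET6, group, &mreq.ipv6mr_multiaddr);
    mreq.ipv6mr_interface = if_nametoindex(ifname);  /* 0 = let the kernel choose */
    return setsockopt(fd, IPPROTO_IPV6, IPV6_JOIN_GROUP, &mreq, sizeof(mreq));
}

/* usage sketch: join6(fd, "ff02::1:3", "eth0"); */
```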
**Multicasting versus Broadcasting on a LAN**

Figure 21.4. Multicast example of a UDP datagram.

![](https://i.imgur.com/cGerhic.gif)

The receiving application on the rightmost host starts and creates a UDP socket, binds port 123 to the socket, and then joins the multicast group 224.0.1.1. We will see shortly that this "join" operation is done by calling setsockopt. When this happens, the IPv4 layer saves the information internally and then tells the appropriate datalink to receive Ethernet frames destined to 01:00:5e:00:01:01. This is the Ethernet address corresponding to the multicast address that the application has just joined, using the mapping we showed in Figure 21.1.

The next step is for the sending application on the leftmost host to create a UDP socket and send a datagram to 224.0.1.1, port 123. Nothing special is required to send a multicast datagram: The application does not have to join the multicast group. The sending host converts the IP address into the corresponding Ethernet destination address and the frame is sent. Notice that the frame contains both the destination Ethernet address (which is examined by the interfaces) and the destination IP address (which is examined by the IP layers).
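The receiving side just described boils down to three calls: socket, bind, and the setsockopt join. A minimal sketch, with error handling mostly elided:

```c
/* Minimal sketch: the receiver of Figure 21.4 - bind port 123, join 224.0.1.1 */
#include <arpa/inet.h>
#include <netinet/in.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>

int main(void) {
    int fd = socket(AF_INET, SOCK_DGRAM, 0);

    struct sockaddr_in addr;
    memset(&addr, 0, sizeof(addr));
    addr.sin_family = AF_INET;
    addr.sin_addr.s_addr = htonl(INADDR_ANY);
    addr.sin_port = htons(123);              /* the port the datagrams are sent to */
    bind(fd, (struct sockaddr *)&addr, sizeof(addr));

    /* the "join": tell the kernel (and the interface) to accept this group */
    struct ip_mreq mreq;
    mreq.imr_multiaddr.s_addr = inet_addr("224.0.1.1");
    mreq.imr_interface.s_addr = htonl(INADDR_ANY);
    if (setsockopt(fd, IPPROTO_IP, IP_ADD_MEMBERSHIP, &mreq, sizeof(mreq)) < 0) {
        perror("IP_ADD_MEMBERSHIP");
        return 1;
    }

    char buf[1500];
    ssize_t n = recvfrom(fd, buf, sizeof(buf), 0, NULL, NULL);
    printf("received %zd bytes\n", n);
    return 0;
}
```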
We assume that the host in the middle is not IPv4 multicast-capable (since support for IPv4 multicasting is optional). This host ignores the frame completely because: (i) the destination Ethernet address does not match the address of the interface; (ii) the destination Ethernet address is not the Ethernet broadcast address; and (iii) the interface has not been told to receive any group addresses (those with the low-order bit of the high-order byte set to 1, as in Figure 21.1).

The frame is received by the datalink on the right based on what we call imperfect filtering, which is done by the interface using the Ethernet destination address. We say this is imperfect because it is normally the case that when the interface is told to receive frames destined to one specific Ethernet multicast address, it can receive frames destined to other Ethernet multicast addresses, too.

When told to receive frames destined to a specific Ethernet multicast address, many current Ethernet interface cards apply a hash function to the address, calculating a value between 0 and 511. One of 512 bits in an array is then turned ON. When a frame passes by on the cable destined for a multicast address, the same hash function is applied by the interface to the destination address (which is the first field in the frame), calculating a value between 0 and 511. If the corresponding bit in the array is ON, the frame is received; otherwise, it is ignored.

Older interface cards reduce the size of the bit array from 512 to 64, increasing the probability that an interface will receive frames in which it is not interested. Over time, as more and more applications use multicasting, this size will probably increase even more. Some interface cards today already have perfect filtering (the ability to filter out datagrams addressed to all but the desired multicast addresses). Other interface cards have no multicast filtering at all, and when told to receive a specific multicast address, must receive all multicast frames (sometimes called multicast promiscuous mode). One popular interface card does perfect filtering for 16 multicast addresses as well as having a 512-bit hash table. Another does perfect filtering for 80 multicast addresses, but then has to enter multicast promiscuous mode.

Even if the interface performs perfect filtering, perfect software filtering at the IP layer is still required because the mapping from the IP multicast address to the hardware address is not one-to-one.

Assuming that the datalink on the right receives the frame, since the Ethernet frame type is IPv4, the packet is passed to the IP layer. Since the received packet was destined to a multicast IP address, the IP layer compares this address against all the multicast addresses that applications on this host have joined. We call this perfect filtering since it is based on the entire 32-bit class D address in the IPv4 header. In this example, the packet is accepted by the IP layer and passed to the UDP layer, which in turn passes the datagram to the socket that is bound to port 123.

There are three scenarios that we do not show in Figure 21.4:

1. A host running an application that has joined the multicast address 225.0.1.1. Since the upper five bits of the group address are ignored in the mapping to the Ethernet address, this host's interface will also be receiving frames with a destination Ethernet address of 01:00:5e:00:01:01. In this case, the packet will be discarded by the perfect filtering in the IP layer.
2. A host running an application that has joined some multicast group whose corresponding Ethernet address just happens to be one that the interface receives when it is programmed to receive 01:00:5e:00:01:01 (i.e., the interface card performs imperfect filtering). This frame will be discarded either by the datalink layer or by the IP layer.
3. A packet destined to the same group, 224.0.1.1, but a different port, say 4000. The rightmost host in Figure 21.4 still receives the packet, which is accepted by the IP layer, but assuming a socket does not exist that has bound port 4000, the packet will be discarded by the UDP layer. This demonstrates that for a process to receive a multicast datagram, the process must join the group and bind the port.

- IP multicast over UDP suffers from the standard limitations of IP and UDP:
    - Unreliable packet delivery
    - Duplicate packet delivery
    - Network congestion (there is no built-in congestion avoidance mechanism as in TCP)

# SSL

### Secure Socket Layer Protocols:

There are four types of protocols:

- SSL Record protocol
- Handshake protocol
- Alert protocol
- Change Cipher Spec protocol

### SSL record protocol

- The SSL Record protocol provides two services for an SSL connection:
    - Confidentiality
    - Message Integrity
- In the SSL Record Protocol, application data is divided into fragments. Each fragment is compressed, then a MAC (Message Authentication Code) generated by an algorithm such as SHA (Secure Hash Algorithm) or MD5 (Message Digest) is appended. After that, the data is encrypted, and finally the SSL header is attached.

### Handshake protocol

- The **Handshake Protocol** is used to establish sessions. This protocol allows the client and server to authenticate each other by sending a series of messages to each other. The handshake uses four phases to complete its cycle (a code-level sketch follows the list):
    - **Phase 1**: Both client and server send hello packets to each other. In this phase, the session ID, cipher suite, and protocol version are exchanged for security purposes.
    - **Phase 2**: The server sends its certificate and the Server-key-exchange message. The server ends phase 2 by sending the Server-hello-done packet.
    - **Phase 3**: The client replies to the server by sending its certificate and the Client-key-exchange message.
    - **Phase 4**: Change-cipher-spec messages are exchanged, and after this the Handshake Protocol ends.
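As a practical aside (not part of the SSL specification itself): with OpenSSL 1.1 or later, all four phases are driven by a single SSL_connect call. A minimal, non-hardened sketch, assuming an already-connected TCP socket and no certificate verification configured:

```c
/* Minimal sketch: TLS client handshake via OpenSSL (assumes connected sockfd) */
#include <stdio.h>
#include <openssl/err.h>
#include <openssl/ssl.h>

SSL *tls_connect(int sockfd) {
    SSL_CTX *ctx = SSL_CTX_new(TLS_client_method());
    if (ctx == NULL) return NULL;
    SSL *ssl = SSL_new(ctx);
    SSL_set_fd(ssl, sockfd);
    if (SSL_connect(ssl) != 1) {   /* runs hellos, certificate, key exchange, CCS */
        ERR_print_errors_fp(stderr);
        SSL_free(ssl);
        SSL_CTX_free(ctx);
        return NULL;
    }
    return ssl;                    /* application data now goes via SSL_read/SSL_write */
}
```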
### Alert protocol

- Used to transmit alerts to the other party, for example: a secure connection cannot be established from the offered options, the certificate received is invalid, or the certificate has expired...
- Each alert has a level, classified into two kinds:
    - **Warning**: This alert has no impact on the connection between sender and receiver.
    - **Fatal Error**: This alert breaks the connection between sender and receiver.

### Change cipher spec protocol

- This protocol uses the SSL record protocol. Until the Handshake Protocol is completed, the SSL record output stays in a pending state. After the handshake protocol, the pending state is converted into the current state.
- The Change-cipher protocol consists of a single message, which is 1 byte in length and can have only one value. This protocol's purpose is to cause the pending state to be copied into the current state.

### Difference between SSL and TLS:

| SSL | TLS |
| --- | --- |
| Supports the Fortezza algorithm | Does not support Fortezza |
| Version 3.0 | Version 1.0 |
| Message digest is used to create the master secret | A pseudo-random function is used to create the master secret |
| Message Authentication Code protocol is used | Hashed Message Authentication Code protocol is used |
| More complex than TLS | Simpler |
| Less secure | Provides higher security |

# Client - Server Design

| Row | Server Description | Process Control CPU Time (difference from baseline) |
| --- | --- | --- |
| 0 | Iterative server (baseline) | 0.0 |
| 1 | Concurrent server, one fork per client request | 20.90 |
| 2 | Pre-fork, each child calling accept | 1.80 |
| 3 | Pre-fork, file locking around accept | 2.07 |
| 4 | Pre-fork, thread mutex locking around accept | 1.75 |
| 5 | Pre-fork, parent passing descriptor to child | 2.58 |
| 6 | One thread per client request | 0.99 |
| 7 | Pre-threaded, mutex locking to protect accept | 1.93 |
| 8 | Pre-threaded, main thread calling accept | 2.05 |

- Creating a pool of children or a pool of threads reduces process control CPU time compared to one fork per client (row 1; sketched below)
- Some implementations allow multiple children or threads to block in a call to `accept`, while others need some type of lock around `accept`
- Having all children or threads call accept is simpler and faster than having the main thread call accept and then pass the descriptor to a child or thread
- Using threads is normally faster than using processes
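A minimal sketch of the row-1 design, one fork per client (the listening-socket setup and the `handle` callback are assumed to exist):

```c
/* Minimal sketch: concurrent server, one fork per client request */
#include <signal.h>
#include <sys/socket.h>
#include <unistd.h>

void serve(int listenfd, void (*handle)(int)) {
    signal(SIGCHLD, SIG_IGN);            /* let the kernel reap finished children */
    for (;;) {
        int connfd = accept(listenfd, NULL, NULL);
        if (connfd < 0) continue;
        if (fork() == 0) {               /* child: serve this one client */
            close(listenfd);
            handle(connfd);
            close(connfd);
            _exit(0);
        }
        close(connfd);                   /* parent: drop its reference, loop again */
    }
}
```

The per-client fork is exactly what makes row 1 so expensive relative to the pre-fork and pre-threaded designs below.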
TCP Pre-forked Server

- No Locking Around Accept
    - Multiple processes call accept on the same listening descriptor
    - With N children, the reference count for the listening descriptor is N+1 (counting the parent process)
    - All processes block in accept
    - When a client arrives, every process is woken up; whichever grabs the connection first handles it, and the rest go back to sleep
- File Locking Around Accept
    - Whichever process enters the lock wait (acquires the lock) first gets to call accept; once accept returns, it releases the lock so another process can take its turn. This way, only one process is in accept when a client arrives
    - CPU time increases because of the time spent acquiring and releasing the lock on every pass
- Thread Locking Around Accept
    - Thread locking is faster than file locking
- Descriptor Passing
    - Uses a stream pipe: a Unix domain stream socket
    - Slower than the "locking around accept" versions
    - Overhead of writing the descriptor to the stream pipe

TCP Pre-threaded Server

- Main Thread Accept:
    - **In practice, solution 8 turns out faster than solution 7!**

# Out of band

- Out-of-band data
    - Expedited data
    - The notification should be sent before any normal (inband) data that is already queued to be sent
    - Higher priority than normal data
    - Out-of-band data is mapped onto the existing connection (instead of using two connections)
- UDP has no implementation of out-of-band data
- TCP has its own flavor of out-of-band data (see the sketch below)
- OOB data conveys 3 different pieces of information to the receiver:
    - The sender went into urgent mode (the notification is transmitted immediately after the sender sends the OOB byte)
    - The existence of an OOB mark
    - The actual value of the OOB byte
- The usefulness of OOB data depends on why it is being used by the application:
    - A special mode of processing for any data received after the OOB mark
    - Discard all data up to the OOB mark
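A minimal sketch of TCP's flavor: the sender marks one byte as urgent with MSG_OOB, and the receiver can test for the mark with the POSIX sockatmark function (function names are made up):

```c
/* Minimal sketch: TCP urgent data - send side and mark test on the receive side */
#include <sys/socket.h>

/* sender: one urgent byte; the urgent-pointer notification goes out immediately */
int send_urgent(int connfd) {
    return (int)send(connfd, "!", 1, MSG_OOB);
}

/* receiver: is the next read positioned at the OOB mark? */
int at_mark(int connfd) {
    return sockatmark(connfd);   /* 1 = at mark, 0 = not at mark, -1 = error */
}
```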