<div style="text-align: center">
<span style="font-size: 3em; font-weight: 700; font-family: Consolas">
Big-Data: <br>
Lab 01
</span>
<br><br>
<span style="">
A lab-assignment for <code>CSC14118</code> "Introduction to Big Data" @ 18CLC-KHMT
</span>
</div>
## Collaborators
- `18127080` **Kiều Vũ Minh Đức** ([@kvmduc](https://github.com/kvmduc))
- `18127231` **Đoàn Đình Toàn** ([@t3bol90](https://github.com/t3bol90))
- `18127004` **Nguyễn Vũ Thu Hiền** ([@ngvuthuhien](https://github.com/ngvuthuhien))
- `18127132` **Bùi Thành Long** ([@btlong](https://github.com/btlong))
---
<div style="page-break-after: always"></div>
# Lab 1: Set Up the Environment to Run the Hadoop System
In this lab, we planned to do only a single-cluster setup. But during our research we found a suitable article, which helped us understand how to configure a simple Fully Distributed mode with real computers. In the end we succeeded and learned a great deal from this assignment.
## Install a single-node cluster
### Step 1: Install software
If your cluster doesn't have the requisite software, you will need to install it:
#### $ sudo apt-get install ssh

#### $ sudo apt-get install pdsh

### Step 2: Install Java and the JDK
If your cluster doesn't have Java, you will need to install it.
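A minimal sketch of installing it on Ubuntu (assuming OpenJDK 11, which matches the `JAVA_HOME` path used later in `~/.bashrc`):

```bash
# Install OpenJDK 11 and verify the installation
sudo apt-get install openjdk-11-jdk
java -version
```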
### Step 3: Adding New User
We should add a new user `hadoopuser` (in a new group `hadoop`):
#### $ sudo addgroup hadoop
#### $ sudo adduser --ingroup hadoop hadoopuser
#### $ sudo adduser hadoopuser sudo
#### $ sudo apt-get install openssh-server
#### $ su - hadoopuser
### Step 4: Download and unpack Hadoop
Go to https://mirror.downloadvn.com/apache/hadoop/common/, choose the Hadoop version you want, and download the file hadoop-x.x.x.tar.gz.
If the version you want is not there, you can find it via Google.

Next, unpack the archive and move the Hadoop folder to wherever you want:
#### $ sudo tar -xvzf hadoop-x.x.x.tar.gz
#### $ sudo mv hadoop-x.x.x /usr/local/hadoop
#### $ sudo chown -R hadoopuser /usr/local
### Step 5: Prepare to Start the Hadoop Cluster
First, we need to change the ~/.bashrc file.
Open the file with the command:
#### $ sudo nano ~/.bashrc
Then copy the following lines and paste them at the end of the file:
>```
>export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64
>export HADOOP_HOME=/usr/local/hadoop
>export PATH=$PATH:$HADOOP_HOME/bin
>export PATH=$PATH:$HADOOP_HOME/sbin
>export HADOOP_MAPRED_HOME=$HADOOP_HOME
>export HADOOP_COMMON_HOME=$HADOOP_HOME
>export HADOOP_HDFS_HOME=$HADOOP_HOME
>export YARN_HOME=$HADOOP_HOME
>export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
>export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib/native"
>```

Change the JAVA_HOME path to match the JDK version installed on your computer.
Save the file, go back to the terminal, and run this command:
#### $ source ~/.bashrc
Next, we need to change the hadoop-env.sh file.
Open the file with the command:
#### $ sudo nano /usr/local/hadoop/etc/hadoop/hadoop-env.sh
Find the line "#export JAVA_HOME=" and replace it with the following:
>export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64

Tip: you can press Ctrl+W and search for "export JAVA_HOME=".
Change the JAVA_HOME path to match the JDK version installed on your computer.
Save and go back to the terminal.
Next, we change the core-site.xml file.
Open the file with the command:
#### $ sudo nano /usr/local/hadoop/etc/hadoop/core-site.xml
Copy the following lines and paste them between `<configuration>` and `</configuration>`:
>```
><property>
>  <name>fs.default.name</name>
>  <value>hdfs://localhost:9000</value>
></property>
>```

Save and go back to the terminal.
Next, we change the hdfs-site.xml file.
Open the file with the command:
#### $ sudo nano /usr/local/hadoop/etc/hadoop/hdfs-site.xml
Copy the following lines and paste them between `<configuration>` and `</configuration>`:
>```
><property>
>  <name>dfs.replication</name>
>  <value>1</value>
></property>
><property>
>  <name>dfs.namenode.name.dir</name>
>  <value>file:/usr/local/hadoop_tmp/hdfs/namenode</value>
></property>
><property>
>  <name>dfs.datanode.data.dir</name>
>  <value>file:/usr/local/hadoop_tmp/hdfs/datanode</value>
></property>
>```

Save and go back to the terminal.
Next, we need to change the yarn-site.xml file.
Open the file with the command:
#### $ sudo nano /usr/local/hadoop/etc/hadoop/yarn-site.xml
Copy the following lines and paste them between `<configuration>` and `</configuration>`:
>```
><property>
>  <name>yarn.nodemanager.aux-services</name>
>  <value>mapreduce_shuffle</value>
></property>
><property>
>  <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
>  <value>org.apache.hadoop.mapred.ShuffleHandler</value>
></property>
>```

Save and go back to the terminal.
Finally, we need to change the mapred-site.xml file.
Open the file with the command:
#### $ sudo nano /usr/local/hadoop/etc/hadoop/mapred-site.xml
Copy the following lines and paste them between `<configuration>` and `</configuration>`:
>```
><property>
>  <name>mapreduce.framework.name</name>
>  <value>yarn</value>
></property>
>```

Save and go back to the terminal.
### Step 6: Run these commands to create directories
Run the following commands in sequence:
#### $ sudo mkdir -p /usr/local/hadoop_tmp
#### $ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/namenode
#### $ sudo mkdir -p /usr/local/hadoop_tmp/hdfs/datanode
#### $ sudo chown -R hadoopuser /usr/local/hadoop_tmp

### Step 7: Setup passphraseless ssh
Now check that you can ssh to the localhost without a passphrase:
#### $ sudo service ssh start
#### $ ssh localhost

If you cannot ssh to localhost without a passphrase, execute the following commands:
#### $ cd ~/.ssh
#### $ ssh-keygen
Generate a public/private RSA key pair; use the default options (press Enter for each prompt).
#### $ cat id_rsa.pub >> authorized_keys
#### $ chmod 640 authorized_keys
#### $ sudo service ssh restart
#### $ ssh localhost

### Step 8: Running Hadoop
Run the following commands in sequence:
#### $ cd
#### $ hdfs namenode -format

Press Y and Enter

#### $ start-dfs.sh

#### $ start-yarn.sh

#### $ jps
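
If everything started correctly, `jps` should list roughly the following daemons (the PIDs here are placeholders and will differ on your machine):

```
12001 NameNode
12102 DataNode
12203 SecondaryNameNode
12304 ResourceManager
12405 NodeManager
12506 Jps
```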

If you do not see the DataNode in the output, you can run the following commands:
#### $ rm -r /usr/local/hadoop_tmp/hdfs/namenode
#### $ rm -r /usr/local/hadoop_tmp/hdfs/datanode
#### $ stop-all.sh
Then redo Step 8.
# Install Fully Distributed Mode
## Config Slave Node
> In this installation, because of the short deadline and being busy with midterms, we tried our best to install it with 2 machines (1 as NameNode + DataNode, 1 as DataNode). We thought it would be impossible to install because there is no step-by-step tutorial or article about this setup. It was hard, really hard, and after doing this task we have learned a lot.

The first step we did was configuring `ssh`. Setting up Fully Distributed Mode requires a rack: each computer needs to be able to communicate with the others.

> In this picture, the 2 users on the 2 machines can `ssh` to each other (by joining the same network).

> We had to deal with another problem: passwordless `ssh`, which is required to make Hadoop work. When we tried to skip it and use normal `ssh`, it caused an error and we had to roll back and configure it.
> Here is the basic idea: use `ssh-keygen` to generate your private and public key pair (the public key ends up in `id_rsa.pub`), then copy it into `~/.ssh/authorized_keys` on the target user's computer. After that you can `ssh` in passwordless mode. Do not forget to set the permissions on this file after creating it (chmod 777 always works :>).
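
A rough sketch of that passwordless setup between our two machines (the hostname `hadoop-slave1` and the user `hdoop` below are placeholders):

```bash
# On the machine that will initiate the connection
ssh-keygen -t rsa                 # generate the key pair; accept the defaults
ssh-copy-id hdoop@hadoop-slave1   # appends id_rsa.pub to ~/.ssh/authorized_keys on the target
ssh hdoop@hadoop-slave1           # should now log in without asking for a password
# (on the target, keeping ~/.ssh/authorized_keys at chmod 600 is safer than chmod 777)
```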


Next, we configured `core-site.xml` and `hdfs-site.xml` on the DataNode based on this article from [Digital Ocean](https://www.digitalocean.com/community/tutorials/how-to-spin-up-a-hadoop-cluster-with-digitalocean-droplets#step-5-%E2%80%94-configure-the-master-node). But that article is a setup for 4 machines already on the same network; they are meant for business, so it is quick and easy to install Hadoop there. On 2 real, slow, cheap student laptops it was not as easy as we thought.

> For example, a network issue: Hadoop cannot resolve the hostnames of the 2 machines if they are given as IP addresses. In Hadoop 2.x there is a quick and dirty solution: install Hadoop 2.7 (only 2.7) and add this line to `hdfs-site.xml` (see the snippet below):
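
The line in question is presumably the standard property that disables that check (shown here as a sketch):

```xml
<!-- hdfs-site.xml: skip the NameNode's IP/hostname check when DataNodes register -->
<property>
  <name>dfs.namenode.datanode.registration.ip-hostname-check</name>
  <value>false</value>
</property>
```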

It turns off the IP-hostname check of Hadoop, but in 3.x this cannot solve our problem (we installed Hadoop version 3.1.4). That leads to a plain computer-networking solution: edit the hosts file. (Every tutorial about installing a Single Cluster or Pseudo-Distributed mode has this step at step 0, but only now, after an hour of research on the internet, do we know why we have to do that.)
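
As a sketch (the addresses and hostnames below are placeholders for our two laptops), `/etc/hosts` on both machines should map each node's name to its LAN address:

```
# /etc/hosts on both machines (example addresses only)
192.168.1.10   hadoop-master    # NameNode (also runs a DataNode)
192.168.1.11   hadoop-slave1    # DataNode
```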

Note that there are many issues with network communication between 2 laptops on a coffee-shop network (where, unlike at home, you cannot access the router directly to configure port forwarding or turn off its firewall, etc.). Here are some of the things we did to install it successfully: turn off the firewall on some ports, disable IPv6 [sof](https://stackoverflow.com/questions/15758641/why-should-disable-ipv6-hadoop-installation), and chown a folder to make it accessible for the 2 users on a dual-boot machine.
After a while [sof](https://stackoverflow.com/questions/44429976/datanode-denied-communication-with-namenode-because-hostname-cannot-be-resolved), our problem was solved by adding more properties:

Then we connected this DataNode to the NameNode above, and the NameNode recognized the machine (with its 170 GB SSD).
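
Roughly, the connection is made by pointing the DataNode's `core-site.xml` at the NameNode and listing the DataNode's hostname in `$HADOOP_HOME/etc/hadoop/workers` on the NameNode (the hostname `hadoop-master` below is a placeholder):

```xml
<!-- core-site.xml on the DataNode: point at the NameNode instead of localhost -->
<property>
  <name>fs.defaultFS</name>
  <value>hdfs://hadoop-master:9000</value>
</property>
```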

Yay! But when running `jps`:

We tried to investigate what was happening with the DataNode (it was running on the machine as a process, which you can check with the `top` command).
Open the log file and scroll to the end:
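
For reference, the DataNode log can be inspected like this (the exact file name depends on the user and hostname, so the path below is only a sketch):

```bash
# Show the last lines of the DataNode log on the slave machine
tail -n 50 $HADOOP_HOME/logs/hadoop-hdoop-datanode-*.log
```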


It seems like our NameNode could not respond to our DataNode. (By this time we already knew about the Hadoop hostname issue, but we had forgotten to configure it for the NameNode; hence it took an hour of reading the log files of the 2 nodes to figure out whether the error came from the DataNode or the NameNode.)
After solving the network problem, we had to deal with another one: permissions on the user's files [sof](https://stackoverflow.com/questions/30688011/hadoop-name-node-format-fails):

The DataNode user (`hdoop`) creates the hdfs-data folder, but it does not have permission to write to it if another user created it before, so we need to chown this folder for our DataNode user. This is the state of the folder at the time the Hadoop system starts to run (using `start-all.sh`).
Instead of that unexpected behavior, this folder needs `rw` permission (or full permission) to keep the system workable.
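
A minimal sketch of the fix, assuming the same data-directory layout as in the single-cluster setup above:

```bash
# Give the Hadoop user ownership of the HDFS data directories and make them writable
sudo chown -R hdoop /usr/local/hadoop_tmp/hdfs
sudo chmod -R u+rwx /usr/local/hadoop_tmp/hdfs
```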

## Config Master Node
We also configured `core-site.xml`, `hdfs-site.xml` and `mapred-site.xml` based on this article from [Digital Ocean](https://www.digitalocean.com/community/tutorials/how-to-spin-up-a-hadoop-cluster-with-digitalocean-droplets#step-5-%E2%80%94-configure-the-master-node) on the other computer, which we chose to be the NameNode.
Add the following lines to `hadoop-env.sh`:
>```
>export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
>export HDFS_NAMENODE_USER="hdoop"
>export HDFS_DATANODE_USER="hdoop"
>export HDFS_SECONDARYNAMENODE_USER="hdoop"
>export YARN_RESOURCEMANAGER_USER="hdoop"
>export YARN_NODEMANAGER_USER="hdoop"
>```




But we found out that everything we did led to an error: although we generated a public/private key pair and sent the public key over, when we tried to connect via ssh it still asked for the key. We had to reinstall from the beginning on the MasterNode computer. Moreover, on the MasterNode we had used a different Hadoop version, which we think may have caused an incompatibility, so reinstalling (to get the same version as the DataNode) was the safe solution.

We also created another user and gave it the root/sudo role in order to access everything without limitation (switching to the user `hdoopms`).
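
A sketch of how that user can be created on the master (the user name `hdoopms` is the one used in this report; the commands assume Ubuntu):

```bash
sudo adduser hdoopms            # create the new user
sudo usermod -aG sudo hdoopms   # grant sudo so nothing blocks the setup
su - hdoopms                    # switch to the new user
```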

We have to configure `core-site.xml`, `hdfs-site.xml`, `mapred-site.xml` and `yarn-site.xml` as above.


This is the critical step we found out we had missed. It caused the problem where the NameNode denied connections from the DataNode, since the DataNode's IP and name must be defined in `/etc/hosts` (which had been configured on the DataNode but not on the NameNode).

> This leads to the following problem: the NameNode has started correctly, but it cannot find the DataNode.

After creating the new account, we configured it in `~/.ssh/config`; since we only have 2 computers, and to simplify the problem, we just use 2 computers instead of 3.
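
A minimal sketch of what that `~/.ssh/config` entry can look like (the hostname, address and key path are placeholders):

```
# ~/.ssh/config on the NameNode machine
Host hadoop-slave1
    HostName 192.168.1.11
    User hdoop
    IdentityFile ~/.ssh/id_rsa
```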


After fixing all the bugs and several hours of struggling with this network-configuration problem, we did it. But it would take quite a bit longer to configure it for 4 machines, one for each teammate, so we keep that for the next deadline.

## Self Evaluation
Although we had some struggles at the beginning, we finally figured out how to set up both a Fully Distributed Cluster and a Single Cluster. We think we have learned a lot through this project.
In this assignment, we could have done more than an installation tutorial with a bunch of bug-prevention notes (but we did not); for example, running a demo on the newly built Fully Distributed Cluster with a four-machine setup.
## References
- https://www.digitalocean.com/community/tutorials/how-to-spin-up-a-hadoop-cluster-with-digitalocean-droplets#step-5-%E2%80%94-configure-the-master-node
- https://www.digitalocean.com/community/tutorials/initial-server-setup-with-ubuntu-16-04
- https://www.digitalocean.com/community/tutorials/how-to-create-a-sudo-user-on-ubuntu-quickstart
- The stackoverflow articles cited on this report.
---
<footer>
<p style="float:left; width: 20%;">
FIT HCMUS 18KHMT-CLC, 2021
</p>
<p style="float:left; width: 60%; text-align:center;">
Have a good day, today. <br> Be happy, stay safe, stay healthy everyone
</p>
<p style="float:left; width: 20%;">
</p>
</footer>