# UE22AM251B Big Data Lab 3
Monitoring Hadoop health and performance.
## Assignment Objectives and Outcomes
* **Understanding Hadoop and MapReduce:** Gain a deep understanding of Hadoop, the Hadoop Distributed File System (HDFS), and the MapReduce programming model. Understand how Hadoop distributes and processes large datasets.
* **Performance Metrics:** Learn about key performance metrics and factors that affect the performance of MapReduce jobs, such as job execution time, data locality, resource utilization, and network latency.
* **Log Analysis:** Understand how to analyze log files generated by Hadoop daemons, including the ResourceManager, NodeManager, NameNode, and DataNode logs. Extract valuable information to identify performance bottlenecks and issues.
## Software/Languages to be used
1. Hadoop ```v3.3.6```
## Tasks Overview
Collect and analyze HDFS metrics for monitoring and performance analysis from the NameNode, the DataNodes, and the ResourceManager.
HDFS emits metrics from two sources, the NameNode and the DataNodes, and for the most part each metric type must be collected at the point of origination. Both the NameNode and DataNodes emit metrics over an HTTP interface as well as via JMX.
You will need to run 3 MapReduce jobs concurrently and monitor the performance of the NameNode, DataNodes, and ResourceManager via ```localhost:8088``` and ```localhost:9870```.
:::info
Hostnames and ports for the Hadoop processes (listed by ```jps```) are not fixed and may vary, but they are all printed to the terminal when you run the MapReduce job (the ```hadoop jar``` command) and are exposed at the endpoint ```http://hostname:port```.
:::
The NameNode offers a summary of health and performance metrics through an easy-to-use web UI. By default, the UI is accessible via port 9870, so point a web browser at: [```http://namenodehost:9870```](http://localhost:9870).
The same metrics are accessible at the JMX endpoint, whose output you are required to submit ([```http://namenodehost:9870/jmx```](http://localhost:9870/jmx)).
A high-level overview of the health of your DataNodes is available in the NameNode dashboard, under the Datanodes tab: [```localhost:9870/dfshealth.html#tab-datanode```](localhost:9870/dfshealth.html#tab-datanode)
The same metrics are accessible at the DataNode JMX endpoint, which is also required to be submitted ([```http://datanodehost:9864/jmx```](http://datanodehost:9864/jmx)).
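If you prefer scripting the metric collection rather than saving JSON from the browser, the same JMX output can be pulled with a short Python script. This is only a minimal sketch, assuming the default web UI ports (9870 for the NameNode, 9864 for the DataNode) on ```localhost``` and the ```srn_task1_*``` output names used later in this handout; adjust the hostnames, ports, and file names to what your terminal reports.
```
#!/usr/bin/env python3
# Minimal sketch: download the JMX metrics of the NameNode and a DataNode as JSON.
# Assumes the default web UI ports on localhost; change ENDPOINTS to match the
# hostnames/ports printed in your terminal when the daemons start.
import json
import urllib.request

ENDPOINTS = {
    "namenode": "http://localhost:9870/jmx",
    "datanode": "http://localhost:9864/jmx",
}

for daemon, url in ENDPOINTS.items():
    with urllib.request.urlopen(url) as response:
        metrics = json.load(response)          # the endpoint returns {"beans": [...]}
    out_file = f"srn_task1_{daemon}.json"      # replace "srn" with your actual SRN
    with open(out_file, "w") as fh:
        json.dump(metrics, fh, indent=2)
    print(f"Saved {len(metrics.get('beans', []))} beans to {out_file}")
```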
## Task Specifications
You have to code a wordcount mapper and reducer to count the frequency of each word in the text file given [here](https://drive.google.com/file/d/1rH6x144b59hd8pZDJ43G0mMXu872vjZY/view?usp=sharing). This MapReduce job is to be run concurrently (on 3 separate terminals), with each run writing to a separate HDFS output directory. A minimal sketch of such a mapper and reducer follows.
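The sketch below is one possible starting point for a Hadoop Streaming wordcount, not a prescribed solution; the simple whitespace tokenization and the file names are assumptions you should adapt to the dataset and to the naming format in the submission guidelines.
```
#!/usr/bin/env python3
# srn_mapper.py -- emit "word<TAB>1" for every whitespace-separated token on stdin
import sys

for line in sys.stdin:
    for word in line.strip().split():
        print(f"{word}\t1")
```
```
#!/usr/bin/env python3
# srn_reducer.py -- sum the counts per word; Hadoop Streaming delivers input sorted by key
import sys

current_word, current_count = None, 0
for line in sys.stdin:
    word, _, count = line.rstrip("\n").partition("\t")
    if word == current_word:
        current_count += int(count)
    else:
        if current_word is not None:
            print(f"{current_word}\t{current_count}")
        current_word, current_count = word, int(count)

# flush the final word
if current_word is not None:
    print(f"{current_word}\t{current_count}")
```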
### Task 1
Open the NameNode and DataNode dashboards to observe memory and performance metrics. Run 3 MapReduce jobs in parallel, refresh each of the pages, and observe the changes in job status, disk usage, and resource usage.
* ```localhost:8088``` (ResourceManager dashboard)
* ```localhost:9870``` (NameNode dashboard)
    * Navigate to the ```Datanodes``` tab
The same metrics can be collected via the JMX endpoints, like so:
```
curl "http://<namenodehost>:<namenodeport>/jmx" > srn_task1_namenode.json
curl "http://<datanodehost>:<datanodeport>/jmx" > srn_task1_datanode.json
```
### Task 2
This task involves intentionally limiting the Java heap memory, attempting to run all 3 jobs in the same manner as Task 1, and recording the stats.
The aim here is to intentionally tamper with some aspects of Hadoop daemon behaviour, so that the same jobs that were successful in Task 1 now fail, and to see how the performance metrics reflect that.
You can limit or decrease the overall heap size allocated to any MapReduce task by adding the following property to ```mapred-site.xml``` in the ```hadoop-3.3.6/etc/hadoop``` directory.
:::info
**Add the following property under the ```configuration``` tag in ```hadoop-3.3.6/etc/hadoop/mapred-site.xml```:**
```
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx64m</value>
</property>
```
Alternatively (or additionally), you can set the same limit in the ```hadoop-env.sh``` script in the ```hadoop-3.3.6/etc/hadoop``` directory.
Uncomment the ```HADOOP_CLIENT_OPTS``` variable and set the memory limit, like so:
```
export HADOOP_CLIENT_OPTS="-Xmx64m $HADOOP_CLIENT_OPTS"
```
:::
Now stop all Hadoop daemon processes, clear the temporary data directories, reformat the NameNode, and restart Hadoop from ```sbin```:
```
./stop-all.sh
sudo rm -rf ~/dfsdata/ ~/tmpdata/
hdfs namenode -format
./start-all.sh
```
Run the Task 1 jobs in the same manner and open up the dashboards.


Collect the same metrics via the JMX endpoints, like so:
```
curl "http://<namenodehost>:<namenodeport>/jmx" > srn_task2_namenode.json
curl "http://<datanodehost>:<datanodeport>/jmx" > srn_task2_datanode.json
```
:::info
Compare your JMX JSON output files from both tasks side by side, and look for drastic differences in key-value pairs such as ```MemHeapUsedM```, ```TotalCompilationTime```, ```TotalReadTime```, ```SystemLoadAverage```, ```GcTimeMillis```, ```SystemCpuLoad```, ```ProcessCpuLoad```. If a value is 0, or several times larger than its Task 1 counterpart, that indicates an unsuccessful job in terms of its performance (see the comparison sketch after this note).
:::
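The comparison can also be scripted. Below is a rough sketch, assuming both dumps follow the standard ```{"beans": [...]}``` JMX layout and use the file names from the deliverables list; it simply prints the selected metrics from the two NameNode dumps side by side so you can spot the drastic differences yourself.
```
#!/usr/bin/env python3
# Rough sketch: print selected JVM/OS metrics from the Task 1 and Task 2 NameNode
# JMX dumps side by side. Swap in the datanode files to compare those as well.
import json

KEYS = ["MemHeapUsedM", "TotalCompilationTime", "TotalReadTime",
        "SystemLoadAverage", "GcTimeMillis", "SystemCpuLoad", "ProcessCpuLoad"]

def collect(path):
    """Return {metric_name: value} for every listed attribute found in any bean."""
    with open(path) as fh:
        beans = json.load(fh).get("beans", [])
    found = {}
    for bean in beans:
        for key in KEYS:
            if key in bean:
                found[key] = bean[key]
    return found

task1 = collect("srn_task1_namenode.json")   # replace "srn" with your actual SRN
task2 = collect("srn_task2_namenode.json")

for key in KEYS:
    print(f"{key:>22}: task1 = {task1.get(key)}    task2 = {task2.get(key)}")
```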
## Submission Guidelines
You will need to make the following changes to your mapper.py and reducer.py scripts to run them on the portal.
Include the following shebang on the first line of your code:
```
#!/usr/bin/env python3
```
Convert your files to executables:
```
chmod +x mapper*.py reducer*.py
```
Convert line breaks from DOS format to Unix format (this is necessary if you are coding on Windows; your code will not run on our portal otherwise):
```
dos2unix mapper*.py reducer*.py
```
## Tasks Deliverables
Submit your mapper, your reducer, and all your JMX JSON output files (NameNode and DataNode) for both tasks, using the naming format specified below.
> srn_mapper.py
> srn_reducer.py
> srn_task1_namenode.json
> srn_task1_datanode.json
> srn_task2_namenode.json
> srn_task2_datanode.json
## Helpful Commands
### Running the MapReduce Job without Hadoop
A MapReduce job can also be run locally without Hadoop. Although the run itself is slower, this helps you debug faster and isolate Hadoop errors from code errors.
```
cat path_to_dataset | python3 mapper.py [command line arguments] | \
    sort -k 1,1 | \
    python3 reducer.py [command line arguments] > output.txt
```
## HDFS Operations
```HDFS``` supports all common file operations and closely resembles the file system commands available on Linux.
You can access ```HDFS``` on the command line using ```hdfs dfs```, prefixing the usual Linux file system command with ```-``` (for example, ```hdfs dfs -ls```).
#### Starting Hadoop
Navigate to the Hadoop ```sbin``` folder and execute the following commands. ```start-all.sh``` is a shell script that starts all the processes that Hadoop requires.
```
cd
cd hadoop-3.3.6/sbin/
./start-all.sh
```
Type ```jps``` to list all the Java processes started by the shell script. You should see a total of 6 processes, including the ```Jps``` process itself. Note that the order of the items and the process IDs will differ on your machine.
```
2994 DataNode
3219 SecondaryNameNode
3927 Jps
3431 ResourceManager
2856 NameNode
3566 NodeManager
```
#### Loading a file into HDFS
A file can be loaded into ```HDFS``` using the following command.
```hdfs dfs -put path_to_file /hdfs_directory_path```
#### Listing files on HDFS
Files can be listed on HDFS using
```hdfs dfs -ls /hdfs_directory_path```
Similarly, ```HDFS``` also supports ```-mkdir```, ```-rm``` and more.
### Running a MapReduce Job
A MapReduce job can be run using the following command
```
hadoop jar path-to-streaming-jar-file \
    -input path_to_input_file_on_hdfs \
    -output path_to_output_folder_on_hdfs \
    -mapper "absolute_path_to_mapper.py command_line_arguments" \
    -reducer "absolute_path_to_reducer.py command_line_arguments"
```
Example streaming jar path: ```hadoop jar /home/USER/hadoop-3.3.6/share/hadoop/tools/lib/hadoop-streaming-3.3.6.jar```
Replace ```USER``` with your VM user ID, or use ```$USER```, which automatically expands to it.
#### To check the output, execute the following command
```
hdfs dfs -cat /output_directory_created_on_hdfs/part-00000
```
#### To save the output to a file:
```
hdfs dfs -cat /output_directory_created_on_hdfs/part-00000 > filename.txt
```
#### To delete directory from hdfs cluster
```
hdfs dfs -rm -r /directory_name
```
Similarly, to delete a file:
```
hdfs dfs -rm /directory_name/file_name
```
## Useful links
1. https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/Metrics.html
2. https://www.datadoghq.com/blog/monitor-hadoop-metrics/
3. https://stackoverflow.com/questions/8464048/out-of-memory-error-in-hadoop?rq=3