# Hadoop Notes (2) *5/23*

## Fully Distributed

[Lecture handout](https://hackmd.io/@yillkid/r1f7HrFHc/https%3A%2F%2Fhackmd.io%2F%40yillkid%2FrJANg-pBq)

* Before starting the fully distributed setup, confirm over SSH that the master and every slave can log in to one another.
* Disable the firewall: `$ sudo ufw disable`
* Bind each machine to a hostname so that hostnames can replace IP addresses from here on. Log out and back in after setting it.

```bash=
# Master
$ sudo hostnamectl set-hostname master

# Slave 1
$ sudo hostnamectl set-hostname slave1

# Slave 2
$ sudo hostnamectl set-hostname slave2
```

* Edit the hosts file on the master and on every slave.

```bash=
# Path: /etc/hosts
$ sudo vi /etc/hosts

<MASTER_IP_ADDRESS> master
<SLAVE1_IP_ADDRESS> slave1
<SLAVE2_IP_ADDRESS> slave2
```

![](https://hackmd.io/_uploads/By5kY45S2.png)

* Edit the four configuration files:

```bash=
$ cd /usr/local/hadoop/etc/hadoop
$ sudo vim core-site.xml
$ sudo vim hdfs-site.xml
$ sudo vim mapred-site.xml
$ sudo vim yarn-site.xml
```

#### core-site.xml

* `fs.defaultFS` is the NameNode address; `hadoop.tmp.dir` is the base directory for Hadoop's temporary files.

```xml=
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://master:9000</value>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/usr/local/hadoop/tmp</value>
    </property>
</configuration>
```

#### hdfs-site.xml

* `dfs.replication` is the number of copies of each block kept across the cluster; it cannot exceed the number of machines.
* `dfs.namenode.name.dir` is the FsImage directory, where the NameNode stores its metadata.
* `dfs.datanode.data.dir` is where the DataNodes store HDFS file blocks.

```xml=
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>2</value>
    </property>
    <property>
        <name>dfs.namenode.name.dir</name>
        <value>/usr/local/hadoop/hdfs/name</value>
    </property>
    <property>
        <name>dfs.datanode.data.dir</name>
        <value>/usr/local/hadoop/hdfs/data</value>
    </property>
</configuration>
```

#### yarn-site.xml

* Set the ResourceManager hostname to `master`.
* Tell the NodeManagers to run the additional Shuffle service.
* Configure the addresses of the ResourceManager services. To find the port a service is using, run `jps` to get its PID, then `sudo netstat -ntlp | grep <pid>` to see what it is listening on.

```xml=
<configuration>
    <property>
        <name>yarn.resourcemanager.hostname</name>
        <value>master</value>
    </property>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
    </property>
    <property>
        <name>yarn.resourcemanager.resource-tracker.address</name>
        <value>master:8031</value>
    </property>
    <property>
        <name>yarn.resourcemanager.scheduler.address</name>
        <value>master:8030</value>
    </property>
    <property>
        <name>yarn.resourcemanager.address</name>
        <value>master:8032</value>
    </property>
    <property>
        <name>yarn.nodemanager.address</name>
        <value>master:44357</value>
    </property>
    <property>
        <name>yarn.resourcemanager.admin.address</name>
        <value>master:8033</value>
    </property>
    <property>
        <name>yarn.resourcemanager.webapp.address</name>
        <value>master:8088</value>
    </property>
</configuration>
```

#### mapred-site.xml

* Only the MapReduce framework property needs to be set:

```xml=
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
</configuration>
```

* If running a MapReduce job raises an error pointing at this file, extend the configuration to:

```xml=
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
</configuration>
```

#### Defining each node's role

* One master node plus the slave/worker nodes.
* Under /usr/local/hadoop/etc/hadoop/, create three files with no extension: `workers` (the file Hadoop 3.x reads; `slaves` is its Hadoop 2.x equivalent) and `master`:

```
/usr/local/hadoop/etc/hadoop/workers:
slave1
slave2

/usr/local/hadoop/etc/hadoop/slaves:
slave1
slave2

/usr/local/hadoop/etc/hadoop/master:
master
```

#### Confirm passwordless login

* On the master, generate a fresh key pair and append the public key to authorized_keys on the master itself and on every slave. Use `ssh <user>@<slave>` to confirm that login works without a password.

```bash=
$ ssh-keygen
$ cd ~/.ssh/
$ cat id_rsa.pub >> authorized_keys
# Then append the same public key to ~/.ssh/authorized_keys on each slave
```

### Copy the four configuration files to the slaves

```bash=
$ scp /usr/local/hadoop/etc/hadoop/* <slave_username>@<slave_IP>:/usr/local/hadoop/etc/hadoop/
```

### Restart the Master

#### Stop

```bash=
$ cd /usr/local/hadoop/sbin/
$ ./stop-dfs.sh ; ./stop-yarn.sh
```

#### Delete temporary data

```bash=
$ sudo rm -rf /usr/local/hadoop/hdfs/name/current
$ pkill -9 java
$ sudo rm /usr/local/hadoop/tmp/*
$ rm -rf /tmp/hadoop*
```

#### JAVA environment variable

```bash=
$ export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
```

#### Format the NameNode

```bash=
$ /usr/local/hadoop/bin/hdfs namenode -format
```

#### Start Hadoop

* Starting from the master automatically brings up the slave nodes.

```bash=
$ cd /usr/local/hadoop/sbin/
$ ./start-dfs.sh ; ./start-yarn.sh
```

### Once the services are up

* Browse to http://master:9870/ for the cluster overview.
* Create a directory on HDFS, or push data to it:

```bash=
$ /usr/local/hadoop/bin/hdfs dfs -mkdir /test
```

![](https://hackmd.io/_uploads/HynvprcBn.png)

* On a slave, check that the directory/data is visible:

```bash=
$ /usr/local/hadoop/bin/hdfs dfs -ls /
```

![](https://hackmd.io/_uploads/B1YyEUqrh.png)

* From a slave, try pushing a file up to HDFS:

```bash=
$ touch a.txt
$ /usr/local/hadoop/bin/hdfs dfs -put a.txt /test/
```

* On the master, confirm the file was added:

```bash=
$ /usr/local/hadoop/bin/hdfs dfs -ls /test/
```

![](https://hackmd.io/_uploads/Skgl7aNU2.png)

* After uploading an input file, run a streaming MapReduce job (a minimal mapper/reducer pair is sketched right after this command):

```bash=
$ /usr/local/hadoop/bin/hadoop jar '/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-3.3.5.jar' \
    -mapper 'python3 mapper.py' \
    -file /home/samantha/mapper.py \
    -reducer 'python3 reducer.py' \
    -file /home/samantha/reducer.py \
    -input hdfs:/workshop/input.txt \
    -output hdfs:/workshop/output
```
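The command above invokes `mapper.py` and `reducer.py`, but the notes never list them. A minimal word-count pair that fits that command line might look like this — a sketch, not the course's actual scripts, and the word-count logic itself is an assumption:

```py=
#!/usr/bin/env python3
# mapper.py -- emit "word<TAB>1" for every whitespace-delimited token on stdin
import sys

for line in sys.stdin:
    for word in line.split():
        print(f"{word}\t1")
```

```py=
#!/usr/bin/env python3
# reducer.py -- stdin arrives sorted by key, so all counts
# for the same word appear on adjacent lines
import sys

current_word, count = None, 0
for line in sys.stdin:
    word, value = line.rsplit("\t", 1)
    if word != current_word:
        if current_word is not None:
            print(f"{current_word}\t{count}")
        current_word, count = word, 0
    count += int(value)

# flush the final key
if current_word is not None:
    print(f"{current_word}\t{count}")
```

Both scripts read stdin and write stdout, which is all Hadoop Streaming requires; the `-mapper 'python3 mapper.py'` / `-reducer 'python3 reducer.py'` arguments take care of invoking them.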
* Monitor running applications:

```bash=
$ /usr/local/hadoop/bin/yarn application -list
```

* Keep watching the Java processes:

```bash=
$ watch jps
```

* If the master's NodeManager fails to come up, add `master` to the `workers` file as well:

```bash=
$ sudo vi /usr/local/hadoop/etc/hadoop/workers
```

![](https://hackmd.io/_uploads/S11LwbPL3.png)

* In each slave's yarn-site.xml, `yarn.nodemanager.address` must point at the slave itself, e.g. `slave1:xxxxx`.

![](https://hackmd.io/_uploads/rJQX_-wI2.png)

* Once the MapReduce job is running, `watch jps` on master/slave1/slave2 shows the work continually spreading across the three nodes, including YarnChild and MRAppMaster processes.

![](https://hackmd.io/_uploads/S1D8K-DIh.jpg)
![](https://hackmd.io/_uploads/SyikcZPI2.png)

* When the job completes, browse to port 8088 to check the ResourceManager and application details, including which node ran each task.

![](https://hackmd.io/_uploads/rkuStfPLn.jpg)
![](https://hackmd.io/_uploads/BkZPKMwIn.jpg)

* Workshop: https://hackmd.io/@uf57VA2KROuAfxAa6dLehQ/HyLpeF4In

###### tags: `hadoop` `fully distributed`

Supplementary:
https://hackmd.io/@WVuRUYslRZm9PNDSsjjDiA/HJXpBevSh
https://hackmd.io/@WVuRUYslRZm9PNDSsjjDiA/H1lu4gqHn
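One last tip: Hadoop Streaming only wires the scripts together through stdin/stdout, with a sort between the two stages, so a mapper/reducer pair can be smoke-tested locally before being submitted to YARN. A minimal sketch, assuming `mapper.py`, `reducer.py`, and an `input.txt` in the current working directory (the file names are this example's assumptions):

```py=
#!/usr/bin/env python3
# local_test.py -- emulate `cat input.txt | mapper.py | sort | reducer.py`
# on one machine, without a cluster.
import subprocess

with open("input.txt", "rb") as src:
    mapped = subprocess.run(["python3", "mapper.py"], stdin=src,
                            capture_output=True, check=True).stdout

# Hadoop sorts map output by key before the reduce phase; `sort` stands in for that.
sorted_out = subprocess.run(["sort"], input=mapped,
                            capture_output=True, check=True).stdout

reduced = subprocess.run(["python3", "reducer.py"], input=sorted_out,
                         capture_output=True, check=True).stdout
print(reduced.decode(), end="")
```

If the local output looks right, the same scripts should behave identically under the streaming jar.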