# Hadoop Notes

### [Installing Hadoop](https://hackmd.io/@yillkid/HybR5uFr5#)
### [Setting up SSH keys](https://hackmd.io/@yillkid/SJ2YYfA43)

*5/18~5/19*

## The Hadoop family: MapReduce → YARN → HDFS

### MapReduce
Handles the mapping and reduction of data.

### YARN (Yet Another Resource Negotiator)
Schedules job resources.
* YARN consists of two main services: the ResourceManager and the NodeManager.
* The ResourceManager (RM) contains two components:
    * Scheduler
    * ApplicationsManager
        * Manages the applications submitted by clients, and monitors and tracks application state.
* NodeManager
    * The ResourceManager's agent on each machine.
    * Manages Containers.
    * Monitors resource usage (CPU, memory, disk, network, etc.).
    * Periodically reports resource usage to the ResourceManager/Scheduler, which then decides how to act on the node's resources (allocate, reclaim, etc.).
* Container
    * Encapsulates the underlying resources. It is started, managed, and monitored by the NodeManager, and scheduled by the ResourceManager.
* ApplicationMaster
    * When a client submits an application, a new ApplicationMaster is created. It requests container resources from the ResourceManager; once the resources are granted, it ships the program to the containers, starts them, and carries out the distributed computation.

### HDFS (Hadoop Distributed File System)
1. NameNode
    * Directs the other nodes, so an HDFS cluster can have only one NameNode.
2. DataNode
    * Once a file is accepted by the NameNode, it is assigned to DataNodes, which store and read/write the data.
3. Secondary NameNode
    * Offloads work from the NameNode, backs up the NameNode's state, and restores that state after the NameNode fails.

## Hadoop's three modes

### Standalone
* Basic syntax of Hadoop's `grep` example:
```<hadoop executable> jar <PATH>/<MapReduce>.jar grep <input> <output> <regular expression>```

#### Workshop
```
# Path of the hadoop executable:
/usr/local/hadoop/bin/hadoop

# Subcommand that runs a Java jar:
jar

# MapReduce example jar shipped with Hadoop:
/usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.5.jar

# The final command:
$ /usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.5.jar grep /home/admin1/input/ /home/admin1/output/ 'dfs[a-z.]+'

# Note: do not create the output directory yourself; Hadoop must create it for the
# job to succeed. On success it contains two files: an empty one named _SUCCESS
# and one holding the matched results.
```

The result is equivalent to running this Linux command:
```$ grep 'dfs' ~/input/*```

### Pseudo-Distributed
* Four configuration files have to be edited; each property takes `name` (property name), `value` (property value), and `description` (property description) tags.
```
$ cd /usr/local/hadoop/etc/hadoop
$ sudo vim core-site.xml
$ sudo vim hdfs-site.xml
$ sudo vim mapred-site.xml
$ sudo vim yarn-site.xml
```
* [Syntax](https://drive.google.com/drive/folders/1_vta2lBpboxCYTDd0-fF8luhHNhFcEG6): simply put the value to set inside the tags.
* <span style= "background:#FFDDAA"> core-site.xml</span>
```xml=
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
        <description></description>
    </property>
    <property>
        <name>hadoop.tmp.dir</name>
        <value>/home/samantha/tmp</value>
    </property>
</configuration>
```
* <span style= "background:#FFDDAA"> hdfs-site.xml</span>
```xml=
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
        <description></description>
    </property>
</configuration>
```
* <span style= "background:#FFDDAA"> mapred-site.xml</span>
```xml=
<configuration>
    <property>
        <name>mapreduce.framework.name</name>
        <value>yarn</value>
        <description></description>
    </property>
    <property>
        <name>yarn.app.mapreduce.am.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.map.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
    <property>
        <name>mapreduce.reduce.env</name>
        <value>HADOOP_MAPRED_HOME=/usr/local/hadoop</value>
    </property>
</configuration>
```
* <span style= "background:#FFDDAA"> yarn-site.xml</span>
```xml=
<configuration>
    <property>
        <name>yarn.nodemanager.aux-services</name>
        <value>mapreduce_shuffle</value>
        <description></description>
    </property>
</configuration>
```
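After editing the four files, it can save time to verify them before formatting HDFS. The sketch below is not part of the original notes: it only uses Python's standard-library `xml.etree.ElementTree` to parse each site file and print every `name`/`value` pair, and it assumes the `/usr/local/hadoop` install path used throughout these notes.
```py=
# Minimal config sanity check (a sketch, not from the original notes).
# Parses the four site files and prints each <name>/<value> pair, so a
# malformed tag or an empty <value> is caught before `hdfs namenode -format`.
import xml.etree.ElementTree as ET

CONF_DIR = "/usr/local/hadoop/etc/hadoop"  # assumed install path from these notes
FILES = ["core-site.xml", "hdfs-site.xml", "mapred-site.xml", "yarn-site.xml"]

for fname in FILES:
    print(f"== {fname} ==")
    root = ET.parse(f"{CONF_DIR}/{fname}").getroot()  # raises ParseError on broken XML
    for prop in root.findall("property"):
        name = prop.findtext("name", default="<missing name>")
        value = prop.findtext("value", default="<missing value>")
        print(f"  {name} = {value}")
```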
* Format the NameNode:
```$ /usr/local/hadoop/bin/hdfs namenode -format```
* Shell scripts that start the services:
    * <span style="background:#FFDDAA"> start-dfs.sh</span>
    * <span style="background:#FFDDAA"> start-yarn.sh</span>
```
$ cd /usr/local/hadoop/sbin/
$ ./start-dfs.sh
$ ./start-yarn.sh

# If an environment variable is needed, it can go into .bashrc:
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/

# To stop the services:
stop-dfs.sh
stop-yarn.sh
stop-all.sh

# When restarting the services, remember to run:
$ pkill -9 java
$ rm -rf /home/yillkid/tmp/*

# Check the processes:
$ jps
$ sudo netstat -ntlp

# Format, then start the NameNode (port 9000) and YARN:
$ /usr/local/hadoop/bin/hdfs namenode -format
$ ./start-dfs.sh
$ sudo netstat -ntlp
$ ./start-yarn.sh
```
* Add the environment variables to `.bashrc`, then load them with `source ~/.bashrc`:
```bash=
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk-amd64/
export HADOOP_HOME=/usr/local/hadoop/
export PATH=$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$PATH
```
* View Hadoop's services with the `jps` command.
* Check which services are listening with `sudo netstat -ntlp`: confirm that DFS is up on port 9000 and that the web services are up on ports 98XX.
* Both keys live in `.ssh`: the public key is `id_rsa.pub`, the private key is `id_rsa`.

#### Pseudo-Distributed / MapReduce
1. Pseudo-distributed Hadoop also starts a web service. In a browser, `ip:9870` shows the overall status of the NameNode once it is up.
    * Nodes of the cluster: 8088
    * NodeManager information: 8042
    * DataNode: 9864
    * Overview: 9870
2. HDFS
    * Create a directory under the HDFS root:
    ```$ /usr/local/hadoop/bin/hdfs dfs -mkdir /test```
    * Copy the *.xml files into the /test directory:
    ```$ /usr/local/hadoop/bin/hdfs dfs -put /usr/local/hadoop/etc/hadoop/*.xml /test```
    * Create the directories the official example needs:
    ```$ /usr/local/hadoop/bin/hdfs dfs -mkdir /user```
    ```$ /usr/local/hadoop/bin/hdfs dfs -mkdir /user/samantha```
    * Run MapReduce; the output directory is created automatically when the job finishes:
    ```/usr/local/hadoop/bin/hadoop jar /usr/local/hadoop/share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.5.jar grep /test /user/samantha/output 'dfs[a-z.]+'```
    * If the current job is stuck, check whether YARN has allocated resources to it:
    ```$ ./yarn application -list```
    * To delete a file or a directory on HDFS:
    ```./hdfs dfs -rm /tmp/a```
    ```./hdfs dfs -rmdir /tmp```
    * To fetch data from HDFS:
    ```/usr/local/hadoop/bin/hdfs dfs -get /user/samantha/output/*```

#### Writing your own MapReduce program
* The examples so far all used Hadoop's bundled Java jar for MapReduce; you can also write your own Python files.
1. mapper.py
```py=
import sys

for line in sys.stdin:
    line = line.strip()  # strip leading/trailing whitespace
    words = line.split()
    for word in words:
        print(word + "," + "1")
```
2. reducer.py
```py=
import sys

line_input = []
for line in sys.stdin:
    line = line.strip()
    arr_line = line.split(",")
    line_input.append(arr_line)

result = {}
for item in line_input:
    key = item[0]
    count = int(item[1])
    if key in result:
        result[key] += count
    else:
        result[key] = count

# Iterate over the dict
for key, value in result.items():
    print(f"{key},{value}")
```
3. Test locally on the command line:
```$ echo "Deer Bear River Car Car River Deer Car Bear" | python3 mapper.py | python3 reducer.py```
4. Run the same mapper and reducer in pseudo-distributed mode:
```$ echo Deer Bear River Car Car River Deer Car Bear > book.txt```
```$ /usr/local/hadoop/bin/hdfs dfs -put ~/book.txt /test```
```
$ /usr/local/hadoop/bin/hadoop jar '/usr/local/hadoop/share/hadoop/tools/lib/hadoop-streaming-3.3.5.jar' \
    -mapper 'python3 mapper.py' \
    -file /home/samantha/mapper.py \
    -reducer 'python3 reducer.py' \
    -file /home/samantha/reducer.py \
    -input hdfs:/test/book.txt \
    -output hdfs:/user/samantha/result
```

Note: parts of these notes are based on [a classmate's notes](https://hackmd.io/@WVuRUYslRZm9PNDSsjjDiA/rJ9ry_Xr3).

###### tags: `hadoop` `standalone` `pseudo distributed`
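One detail worth knowing about step 4: in a real streaming job, Hadoop sorts the mapper output by key before the reduce phase, so reducers can assume grouped input. The `reducer.py` above happens not to rely on this because it aggregates everything in a dict. The sketch below is not from the original notes (all function names are hypothetical); it imitates the full map → shuffle/sort → reduce flow in plain Python, reusing the same `word,1` record format.
```py=
# A pure-Python sketch of the streaming flow (not from the original notes).
# It reproduces what Hadoop does between mapper.py and reducer.py:
# map -> shuffle/sort by key -> reduce over grouped records.
import itertools

def map_phase(text):
    # Same behaviour as mapper.py: emit one "word,1" record per word.
    for word in text.split():
        yield f"{word},1"

def shuffle_sort(records):
    # Hadoop sorts mapper output by key so each reducer sees its keys grouped.
    return sorted(records, key=lambda r: r.split(",")[0])

def reduce_phase(records):
    # With sorted input, a reducer can sum consecutive records per key.
    for key, group in itertools.groupby(records, key=lambda r: r.split(",")[0]):
        yield f"{key},{sum(int(r.split(',')[1]) for r in group)}"

if __name__ == "__main__":
    text = "Deer Bear River Car Car River Deer Car Bear"
    for line in reduce_phase(shuffle_sort(map_phase(text))):
        print(line)  # Bear,2  Car,3  Deer,2  River,2
```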