---
tags: Spark
---

# Spark

| Host IP | Hostname | Notes |
| -------- | -------- | ---- |
| 140.113.72.227:22 | master | Neil, account: User |
| 140.113.72.205:22 | data1 | Wei |

**ssh-keygen -R 192.168.2.151**
**ssh-keygen -f "/home/hduser/.ssh/known_hosts" -R 140.113.72.227**
These remove stale host-key entries and clear the warning shown when reconnecting.

# HW5-spark

![](https://i.imgur.com/wTenjIZ.png)

After testing, maxDepth=14 with maxBins=9 gave the best accuracy, 0.76
precision = 0.696
recall = 0.586

## Discussion of results

Thank goodness for the book: following its versions and steps caused almost no problems, but whenever I tried to do things a different way I hit big pitfalls. For example, the newest Ubuntu release has an IPv6-to-IPv4 issue that keeps Hadoop from starting completely, and trying to replace the IPython launch method with Anaconda somehow broke the environment so Spark would no longer run. The assignment took a very long time to finish, and it only worked out because I stuck to the book in the end.
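For reference, here is a minimal PySpark MLlib sketch of how a tree with those parameters could be trained and scored. It is not the homework notebook itself: it assumes `trainData` and `validationData` are already RDDs of `LabeledPoint` (see the data-preparation sketch at the end of this note), that `sc` is the SparkContext provided by pyspark, and that entropy is used as the impurity; all of these names are illustrative.

```python
# Hedged sketch: train a binary-classification DecisionTree with the best
# parameters found above (maxDepth=14, maxBins=9) and compute accuracy,
# precision and recall by hand.
# Assumes trainData / validationData are RDDs of LabeledPoint and that
# sc (the SparkContext) already exists inside the pyspark notebook.
from pyspark.mllib.tree import DecisionTree

model = DecisionTree.trainClassifier(
    trainData, numClasses=2, categoricalFeaturesInfo={},
    impurity="entropy", maxDepth=14, maxBins=9)

# Pair every prediction with its true label
predictions = model.predict(validationData.map(lambda p: p.features))
scoreAndLabels = predictions.zip(validationData.map(lambda p: p.label))

tp = scoreAndLabels.filter(lambda pl: pl[0] == 1.0 and pl[1] == 1.0).count()
fp = scoreAndLabels.filter(lambda pl: pl[0] == 1.0 and pl[1] == 0.0).count()
fn = scoreAndLabels.filter(lambda pl: pl[0] == 0.0 and pl[1] == 1.0).count()
correct = scoreAndLabels.filter(lambda pl: pl[0] == pl[1]).count()

accuracy = float(correct) / validationData.count()
precision = float(tp) / (tp + fp)
recall = float(tp) / (tp + fn)
print(accuracy, precision, recall)
```

Sweeping a grid of (maxDepth, maxBins) pairs and keeping the combination with the highest accuracy is how a best setting like 14/9 would be found.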
### Steps and screenshots

![](https://i.imgur.com/GlELQcg.png)

# Hadoop Single Node Cluster installation

## <font color=red>Run VirtualBox as administrator</font>

## 1. Install the JDK

1. In the Linux VM, open a terminal and run
```linux
java -version
```
If the output mentions "headless", Java is not installed yet; in that case continue with the steps below.
2. Update the apt package lists and install the JDK
```linux
sudo apt-get update
sudo apt-get install default-jdk
```
After both commands finish, run
```linux
java -version
```
again; if a version number is printed, the installation succeeded.
3. Check the Java path
```linux
update-alternatives --display java
```

## 2. Set up SSH

1. Install ssh
```linux
sudo apt-get install ssh
```
2. Install rsync
```linux
sudo apt-get install rsync
```
3. Generate a DSA key pair
```linux
ssh-keygen -t dsa -P '' -f ~/.ssh/id_dsa
```
4. Add the generated public key to the authorized keys file
```linux
cat ~/.ssh/id_dsa.pub >> ~/.ssh/authorized_keys
```

## 3. Install Hadoop

1. Download
```linux
wget https://archive.apache.org/dist/hadoop/core/hadoop-2.6.4/hadoop-2.6.4.tar.gz
```
2. Extract
```linux
sudo tar -zxvf hadoop-2.6.4.tar.gz
```
3. Move hadoop to /usr/local
```linux
sudo mv hadoop-2.6.4 /usr/local/hadoop
```

## 4. Set the Hadoop environment variables

0. Install vim first
```linux
sudo apt-get install vim
```
1. Edit the ~/.bashrc file
1-1
```linux
sudo vim ~/.bashrc
```
1-2 Press "i" to enter insert mode
1-3 Add the following where it will not interfere with the existing shell script, e.g. at the very bottom
```linux
export JAVA_HOME=/usr/lib/jvm/java-7-openjdk-amd64 # adjust to your own Java location
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:$HADOOP_HOME/bin
export PATH=$PATH:$HADOOP_HOME/sbin
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export YARN_HOME=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
```
![](https://i.imgur.com/U5Qmh5Q.png)
```linux
source ~/.bashrc
```

## 5. Edit the Hadoop configuration files

1. Edit hadoop-env.sh
```linux
sudo vim /usr/local/hadoop/etc/hadoop/hadoop-env.sh
```
2. Change JAVA_HOME to your own JAVA_HOME location
Before:
![](https://i.imgur.com/tfZSpZt.png)
After:
![](https://i.imgur.com/Ykw7GAp.png)
3. Configure core-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/core-site.xml
```
Add the following between <configuration></configuration>
```xml
<property>
   <name>fs.default.name</name>
   <value>hdfs://localhost:9000</value>
</property>
```
![](https://i.imgur.com/fO75ZnQ.png)
4. Configure yarn-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
```
Add the following between <configuration></configuration>
```xml
<property>
   <name>yarn.nodemanager.aux-services</name>
   <value>mapreduce_shuffle</value>
</property>
<property>
   <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
   <value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
```
![](https://i.imgur.com/NjyQtGJ.png)
5. Configure mapred-site.xml
```linux
sudo cp /usr/local/hadoop/etc/hadoop/mapred-site.xml.template /usr/local/hadoop/etc/hadoop/mapred-site.xml
sudo vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
```
Add the following between <configuration></configuration>
```xml
<property>
   <name>mapreduce.framework.name</name>
   <value>yarn</value>
</property>
```
![](https://i.imgur.com/9Ykg58l.png)
6. Configure hdfs-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
```
Add the following between <configuration></configuration>
```xml
<property>
   <name>dfs.replication</name>
   <value>3</value>
</property>
<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>
```
![](https://i.imgur.com/HWER8DW.png)

## 6. Create and format the HDFS directories

1. Create the NameNode storage directory
```linux
sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
```
2. Create the DataNode storage directory
```linux
sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
```
3. Change the owner of the hadoop directory
```linux
# On Ubuntu 14.04:
sudo chown hduser:hduser -R /usr/local/hadoop
# On newer Ubuntu releases:
#sudo chown -R neil /usr/local/hadoop  # replace neil with your username
#sudo chmod 777 -R /usr/local/hadoop
```
4. Format HDFS
```linux
hadoop namenode -format
```
5. Start HDFS and YARN (run the two scripts, or simply start-all.sh)
```linux
start-dfs.sh
start-yarn.sh
start-all.sh
```
6. Open a browser and visit
http://localhost:8088/
http://localhost:50070/

# Hadoop Multi Node Cluster installation

## Clone the Single Node Cluster as data1
See book pages 088-091

## Set up data1

1. Edit interfaces
```linux
sudo vim /etc/network/interfaces
```
Append the following to the file
```linux
# Ubuntu 14.04:
# NAT interface
auto eth0
iface eth0 inet dhcp

# host only interface
auto eth1
iface eth1 inet static
address    192.168.56.101
netmask    255.255.255.0
network    192.168.56.0
broadcast  192.168.56.255

# Newer Ubuntu releases:
# NAT interface
auto enp0s3
iface enp0s3 inet dhcp

# host only interface
auto enp0s8
iface enp0s8 inet static
address    192.168.56.101
netmask    255.255.255.0
network    192.168.56.0
broadcast  192.168.56.255
```
![](https://i.imgur.com/MPNlquc.png)
2. Edit the hostname
```linux
sudo vim /etc/hostname
```
Change the content to data1
3. <font color="red">Edit hosts so every machine maps to its IP</font>
```linux
sudo vim /etc/hosts
```
Enter
```linux
192.168.56.100 master
192.168.56.101 data1
192.168.56.102 data2
192.168.56.103 data3
```
![](https://i.imgur.com/nWOKhw8.png)
4. Edit core-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/core-site.xml
```
Change localhost to master
![](https://i.imgur.com/06byrBw.png)
5. Edit yarn-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/yarn-site.xml
```
Add the following inside <configuration></configuration>
```xml
<property>
   <name>yarn.resourcemanager.resource-tracker.address</name>
   <value>master:8025</value>
</property>
<property>
   <name>yarn.resourcemanager.scheduler.address</name>
   <value>master:8030</value>
</property>
<property>
   <name>yarn.resourcemanager.address</name>
   <value>master:8050</value>
</property>
```
![](https://i.imgur.com/evXpEhk.png)
6. Edit mapred-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/mapred-site.xml
```
Change <property>...</property> to
```xml
<property>
   <name>mapred.job.tracker</name>
   <value>master:54311</value>
</property>
```
7. Edit hdfs-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
```
Change <configuration>...</configuration> to
```xml
<property>
   <name>dfs.replication</name>
   <value>3</value>
</property>
<property>
   <name>dfs.datanode.data.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>
```
8. Restart data1
9. Run ifconfig to confirm the IP (see book page 097)

## Clone data1 to data2, data3 and master

## Set up data2
1. Repeat steps 1 and 2 of "**Set up data1**", but change the address to 192.168.56.102 and the hostname to data2
2. Reboot

## Set up data3
1. Repeat steps 1 and 2 of "**Set up data1**", but change the address to 192.168.56.103 and the hostname to data3
2. Reboot

## Set up master
1. Repeat steps 1 and 2 of "**Set up data1**", but change the address to 192.168.56.100 and the hostname to master
2. Reboot
3. Configure hdfs-site.xml
```linux
sudo vim /usr/local/hadoop/etc/hadoop/hdfs-site.xml
```
Add the following between <configuration></configuration>
```xml
<property>
   <name>dfs.replication</name>
   <value>3</value>
</property>
<property>
   <name>dfs.namenode.name.dir</name>
   <value>file:/usr/local/hadoop/hadoop_data/hdfs/namenode</value>
</property>
```
4. Edit the masters file
```linux
sudo vim /usr/local/hadoop/etc/hadoop/masters
```
Enter
```
master
```
5. Edit the slaves file
```linux
sudo vim /usr/local/hadoop/etc/hadoop/slaves
```
Enter
```
data1
data2
data3
```
6. ssh to data1
```linux
ssh data1
```
Remove the entire hdfs directory
```linux
sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs
```
Create the DataNode storage directory
```linux
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
```
Set the directory ownership
```linux
# On Ubuntu 14.04:
sudo chown hduser:hduser -R /usr/local/hadoop
# On newer Ubuntu releases:
#sudo chown -R neil /usr/local/hadoop  # replace neil with your username
#sudo chmod 777 -R /usr/local/hadoop
```
Disconnect
```
exit
```
ssh to data2 and data3 and repeat the steps above.
7. Rebuild the NameNode on the master host
Remove the entire hdfs directory
```linux
sudo rm -rf /usr/local/hadoop/hadoop_data/hdfs
```
Create the NameNode storage directory
```linux
mkdir -p /usr/local/hadoop/hadoop_data/hdfs/namenode
```
Set the directory ownership
```linux
# On Ubuntu 14.04:
sudo chown hduser:hduser -R /usr/local/hadoop
# On newer Ubuntu releases:
#sudo chown -R neil /usr/local/hadoop  # replace neil with your username
#sudo chmod 777 -R /usr/local/hadoop
```
Format the NameNode HDFS
```linux
hadoop namenode -format
```
Stop dfs & yarn
```
stop-all.sh
```
8. Start the Hadoop Multi Node Cluster
```
start-dfs.sh
start-yarn.sh
```
9. Open http://localhost:8088/cluster/nodes to check
![](https://i.imgur.com/sAmFK3n.png)
10. Open http://localhost:50070/ to check
![](https://i.imgur.com/aSsM0YM.png)

# MapReduce

Follow chapter 7 of the book. Commands used:
```linux
mkdir -p ~/wordcount/input
sudo vim ~/.bashrc
# add the following exports
# export PATH=${JAVA_HOME}/bin:${PATH}
# export HADOOP_CLASSPATH=${JAVA_HOME}/lib/tools.jar
source ~/.bashrc

hadoop com.sun.tools.javac.Main WordCount.java
jar cf wc.jar WordCount*.class
cp /usr/local/hadoop/LICENSE.txt ~/wordcount/input/
start-all.sh
hadoop fs -mkdir -p /user/hduser/wordcount/input
hadoop fs -copyFromLocal LICENSE.txt /user/hduser/wordcount/input
hadoop jar wc.jar WordCount /user/hduser/wordcount/input/LICENSE.txt /user/hduser/wordcount/output
hadoop fs -ls /user/hduser/wordcount/output
hadoop fs -cat /user/hduser/wordcount/output/part-r-00000|more
```

# Spark

1. Download scala and extract it
```linux
wget https://downloads.lightbend.com/scala/2.11.6/scala-2.11.6.tgz
tar xvf scala-2.11.6.tgz
```
2. Move scala to /usr/local
```linux
sudo mv scala-2.11.6 /usr/local/scala
```
3. Update bashrc
```linux
sudo vim ~/.bashrc
```
Append
```linux
export SCALA_HOME=/usr/local/scala
export PATH=$PATH:$SCALA_HOME/bin
```
Reload bashrc
```linux
source ~/.bashrc
```
4. Install Spark 2.0
```linux
wget https://archive.apache.org/dist/spark/spark-2.0.0/spark-2.0.0-bin-hadoop2.6.tgz
tar zxf spark-2.0.0-bin-hadoop2.6.tgz
```
5. Move spark-2.0.0-bin-hadoop2.6 to /usr/local/spark
```linux
sudo mv spark-2.0.0-bin-hadoop2.6 /usr/local/spark
```
6. Update bashrc
```linux
sudo vim ~/.bashrc
```
Append
```linux
export SPARK_HOME=/usr/local/spark
export PATH=$PATH:$SPARK_HOME/bin
```
Reload bashrc
```linux
source ~/.bashrc
```
7. Switch to the spark configuration directory
```linux
cd /usr/local/spark/conf
cp log4j.properties.template log4j.properties
sudo vim log4j.properties
```
Change INFO to WARN
8. Create the test document
```linux
sudo mkdir ~/wordcount
cd ~/wordcount/
sudo mkdir input
sudo cp /usr/local/hadoop/LICENSE.txt ~/wordcount/input
hadoop fs -mkdir -p /user/hduser/wordcount/
cd ~/wordcount/input
hadoop fs -copyFromLocal LICENSE.txt /user/hduser/wordcount/input
```
9. Set up the Spark standalone cluster environment
```linux
cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh
sudo vim /usr/local/spark/conf/spark-env.sh
```
Enter
```linux
export SPARK_MASTER_IP=master
export SPARK_WORKER_CORES=1
export SPARK_WORKER_MEMORY=512m
export SPARK_EXECUTOR_INSTANCES=4
```
Connect to data1, data2 and data3 and run the following on each
```linux
sudo rm -R /usr/local/spark
sudo mkdir /usr/local/spark
sudo chown hduser:hduser /usr/local/spark
```
Copy the master's spark over
```linux
sudo scp -r /usr/local/spark hduser@data1:/usr/local
sudo scp -r /usr/local/spark hduser@data2:/usr/local
sudo scp -r /usr/local/spark hduser@data3:/usr/local
```
Spark UI: http://master:8080/
![](https://i.imgur.com/2VFGtIp.png)
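With LICENSE.txt already in HDFS from step 8, a quick way to sanity-check the Spark installation is a word count from the `pyspark` shell. This is only a sketch of my own, not a step from the book; it assumes the HDFS path created above, the hdfs://master:9000 default filesystem set in core-site.xml, and the `sc` SparkContext that the shell provides.

```python
# Word-count sanity check, run inside the pyspark shell (which provides sc).
# Assumes LICENSE.txt was copied to HDFS in step 8 above.
textFile = sc.textFile("hdfs://master:9000/user/hduser/wordcount/input/LICENSE.txt")

counts = (textFile
          .flatMap(lambda line: line.split(" "))
          .map(lambda word: (word, 1))
          .reduceByKey(lambda a, b: a + b))

# Print the 10 most frequent words
for word, n in counts.takeOrdered(10, key=lambda wc: -wc[1]):
    print(word, n)
```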
# IPython

1. Install anaconda on master
```linux
sudo wget https://repo.anaconda.com/archive/Anaconda2-2.5.0-Linux-x86_64.sh
bash Anaconda2-2.5.0-Linux-x86_64.sh -b
```
2. Update bashrc on master
```linux
sudo vim ~/.bashrc
```
Append
```linux
export PATH=/home/hduser/anaconda2/bin:$PATH
export ANACONDA_PATH=/home/hduser/anaconda2
export PYSPARK_DRIVER_PYTHON=$ANACONDA_PATH/bin/ipython
export PYSPARK_PYTHON=$ANACONDA_PATH/bin/python
```
Reload bashrc
```linux
source ~/.bashrc
```
3. Repeat steps 1 and 2 on data1, data2 and data3
4. Create the ipynotebook directory
```linux
mkdir -p ~/pythonwork/ipynotebook
cd ~/pythonwork/ipynotebook
```
Run pyspark
```linux
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" pyspark
```
5. Run the ipython notebook on hadoop yarn (either of the following; the second is the Spark 2.x syntax)
```linux
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop MASTER=yarn-client pyspark
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop pyspark --master yarn --deploy-mode client
```
6. Run the ipython notebook on Spark standalone
```linux
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=spark://master:7077 pyspark --num-executors 1 --total-executor-cores 2 --executor-memory 512m
```
7. Run in local mode
```linux
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=local[*] pyspark
```

# Spark + IPython binary decision tree

1. Download the data from https://www.kaggle.com/c/stumbleupon/data (click save)
2. Copy train.tsv and test.tsv (from the Downloads folder, ~/下載) to the project data directory, then run
```linux
cp ~/下載/train.tsv ~/pythonwork/PythonProject/data
cp ~/下載/test.tsv ~/pythonwork/PythonProject/data
cd ~/pythonwork/PythonProject/data
hadoop fs -mkdir /user/hduser/data
hadoop fs -copyFromLocal *.tsv /user/hduser/data
hadoop fs -ls /user/hduser/data/*.tsv
cd ~/pythonwork/ipynotebook/
PYSPARK_DRIVER_PYTHON=ipython PYSPARK_DRIVER_PYTHON_OPTS="notebook" MASTER=local[*] pyspark
```
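Inside the notebook, the preprocessing then mirrors the HW5 section at the top of this note: parse train.tsv into `LabeledPoint` RDDs and feed them to `DecisionTree.trainClassifier`. The outline below is only a sketch under a few assumptions (header row skipped, double quotes stripped, "?" treated as 0, the first four columns url/urlid/boilerplate/alchemy_category dropped so that only the numeric columns are used, last column taken as the label); the actual column handling may need adjusting.

```python
# Hedged outline of the StumbleUpon preprocessing; the column choices here
# are assumptions and may need to be adapted to the real train.tsv layout.
from pyspark.mllib.regression import LabeledPoint

raw = sc.textFile("hdfs://master:9000/user/hduser/data/train.tsv")
header = raw.first()                                # first line is the header
lines = (raw.filter(lambda line: line != header)
            .map(lambda line: line.replace('"', '').split('\t')))

def to_float(value):
    # "?" marks a missing value; treat it as 0
    return 0.0 if value == '?' else float(value)

def to_labeled_point(fields):
    features = [to_float(x) for x in fields[4:-1]]  # numeric columns only
    label = float(fields[-1])                       # binary label (0/1) in the last column
    return LabeledPoint(label, features)

data = lines.map(to_labeled_point)
trainData, validationData = data.randomSplit([0.8, 0.2])
trainData.persist()
validationData.persist()
```

These two RDDs can then be passed to the `DecisionTree.trainClassifier` call sketched in the HW5 section, and the grid of maxDepth/maxBins values evaluated the same way.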