---
title: Apache Spark Implementation Notes
tags: Apache, Spark
description: Apache Spark implementation notes
---

# Apache Spark Implementation Notes

You first need to complete the Hadoop environment setup ([**link to the previous post**](https://hackmd.io/24db-1beRxqLHPFXHKXzzg)); this guide then layers an Apache Spark environment on top of that HDFS cluster.

## Setting up Apache Spark

First switch to the hadoop user, then install Scala:

```
$ su hadoop
$ sudo apt-get install scala
```

Download spark-3.2.0 and extract it under /usr/local:

```
$ cd /usr/local/
$ wget https://dlcdn.apache.org/spark/spark-3.2.0/spark-3.2.0-bin-hadoop2.7.tgz
$ tar -xvf spark-3.2.0-bin-hadoop2.7.tgz
$ mv /usr/local/spark-3.2.0-bin-hadoop2.7 /usr/local/spark
$ chown -R hadoop:hadoop /usr/local/spark
```

Edit the environment variables:

```
$ nano ~/.bashrc
```

Add the following line (the `export` is needed so child processes, such as the Spark scripts, can see the variable):

```
export SPARK_HOME=/usr/local/spark
```

Reload the environment variables:

```
$ source ~/.bashrc
```

Create a spark-env script from the template and open it for editing:

```
$ cp /usr/local/spark/conf/spark-env.sh.template /usr/local/spark/conf/spark-env.sh
$ nano /usr/local/spark/conf/spark-env.sh
```

Add the following lines to the spark-env script:

```
export PYSPARK_PYTHON=python3
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/share/scala
```

Create a workers file from the template and open it for editing:

```
$ cp /usr/local/spark/conf/workers.template /usr/local/spark/conf/workers
$ nano /usr/local/spark/conf/workers
```

Add the following hosts to the workers file:

```
master
slave01
slave02
slave03
```

Copy the Spark directory to the other machines:

```
$ scp -r /usr/local/spark slave01:/usr/local/
$ scp -r /usr/local/spark slave02:/usr/local/
$ scp -r /usr/local/spark slave03:/usr/local/
```

Start Spark:

```
$ cd /usr/local/spark/sbin
$ ./start-all.sh
```

On success, the script reports each Master and Worker process it starts along with the log file it writes to.
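Before running the bundled examples, it can be worth a quick end-to-end check that the standalone cluster accepts work. Below is a minimal sketch, assuming the master host is reachable as `master` on the default standalone port 7077 (both follow the workers file above); the file name `smoke_test.py` is arbitrary.

```
# smoke_test.py -- minimal check that the standalone cluster accepts jobs.
# Assumes the master host is reachable as "master" on the default port 7077.
from pyspark.sql import SparkSession

spark = (
    SparkSession.builder
    .appName("cluster-smoke-test")
    .master("spark://master:7077")   # standalone master URL (assumed hostname)
    .getOrCreate()
)

# Distribute a trivial computation across the workers and sum the result.
rdd = spark.sparkContext.parallelize(range(1000), numSlices=8)
print("sum 0..999 =", rdd.sum())     # expected: 499500

spark.stop()
```

Submit it with `$SPARK_HOME/bin/spark-submit smoke_test.py`; if the sum prints, the workers are reachable.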
Run the Pi example to test Spark on YARN (the `--class` flag only applies to JVM applications, so it is omitted for the Python script):

```
$ cd $SPARK_HOME
$ bin/spark-submit \
  --master yarn \
  --deploy-mode cluster \
  --driver-memory 1g \
  --executor-memory 1g \
  --queue default \
  /usr/local/spark/examples/src/main/python/pi.py 100
```

On success, the YARN report at the end of the output shows `final status: SUCCEEDED`.
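For reference, the bundled `pi.py` estimates π by Monte Carlo sampling. The sketch below reproduces the core idea rather than the exact bundled source: it scatters random points in a square and counts how many land inside the inscribed circle, so the inside ratio approaches π/4.

```
# pi_estimate.py -- Monte Carlo estimate of pi, in the spirit of
# examples/src/main/python/pi.py (a sketch, not the exact bundled source).
import sys
from random import random
from operator import add
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("PythonPi").getOrCreate()

partitions = int(sys.argv[1]) if len(sys.argv) > 1 else 2
n = 100000 * partitions

def inside(_):
    # Sample a point in the 2x2 square centred on the origin;
    # count it if it falls inside the unit circle.
    x = random() * 2 - 1
    y = random() * 2 - 1
    return 1 if x * x + y * y <= 1 else 0

count = spark.sparkContext.parallelize(range(n), partitions).map(inside).reduce(add)
print("Pi is roughly %f" % (4.0 * count / n))

spark.stop()
```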
#### Testing Spark

Upload a hello.txt file to HDFS; it will be used for the Spark data-processing tests that follow.

```
$ cd /usr/local/hadoop/
$ nano hello.txt
```

Add the following content to hello.txt and save it:

```
hello xm
hello sir
java c
python vb
java c++
go php
erlang java
```

Upload hello.txt from the local filesystem to HDFS:

```
$ bin/hadoop fs -put hello.txt /
```

Enter the pyspark shell to test (the lines after the `>>>` prompt are Python statements, not shell commands):

```
$ cd $SPARK_HOME
$ ./bin/pyspark
>>> textFile = spark.read.text("/hello.txt")
>>> textFile.count()
7
```
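Still in the same pyspark shell, the DataFrame can be queried further. As a quick sketch, the lines of `/hello.txt` that contain "java" can be counted and listed:

```
# Continuing in the pyspark shell: filter the lines of /hello.txt.
from pyspark.sql.functions import col

lines = spark.read.text("/hello.txt")

# Keep only rows whose "value" column contains the substring "java".
java_lines = lines.filter(col("value").contains("java"))
java_lines.count()            # expected: 3 ("java c", "java c++", "erlang java")
java_lines.show(truncate=False)
```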
#### PySpark test

Install the pyspark package, then enter a python3 shell:

```
$ pip3 install pyspark
$ python3
```

Test Spark data processing from the Python shell:

```
import pyspark
from pyspark import SparkContext
from pyspark import SparkConf

conf = SparkConf().setAppName("miniProject").setMaster("local[*]")
sc = SparkContext.getOrCreate(conf)

# Read hello.txt from HDFS as an RDD of lines
data = sc.textFile("/hello.txt")

# .first() returns the first line of the RDD
data.first()
# Output: 'hello xm'

# .collect() returns all lines as a list of strings
data.collect()
# Output: ['hello xm', 'hello sir', 'java c', 'python vb', 'java c++', 'go php', 'erlang java']
```
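Note that `textFile` is line-oriented; `wholeTextFiles` instead yields one `(path, content)` pair per file. A short sketch of the difference on the same file, assuming `fs.defaultFS` is `hdfs://master:9000`:

```
# wholeTextFiles returns (path, full-file-content) pairs, one per file,
# unlike textFile, which returns one element per line.
pairs = sc.wholeTextFiles("/hello.txt")
pairs.collect()
# Output (assuming fs.defaultFS is hdfs://master:9000):
# [('hdfs://master:9000/hello.txt',
#   'hello xm\nhello sir\njava c\npython vb\njava c++\ngo php\nerlang java\n')]
```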
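As a slightly fuller exercise on the same RDD, a classic word count chains `flatMap`, `map`, and `reduceByKey` (a sketch continuing the session above):

```
from operator import add

# Split each line into words, pair each word with 1, then sum per word.
counts = (
    data.flatMap(lambda line: line.split())
        .map(lambda word: (word, 1))
        .reduceByKey(add)
)
counts.collect()
# Expected pairs include ('java', 3), ('hello', 2), ('c', 1), ...
```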
---

## References

* https://hackmd.io/@JeffWen/bdsevm
* https://shark.gitbook.io/hadoop/jia-spark
* https://blog.csdn.net/cymy001/article/details/78483723

## Thank you! :dash:

You can find me on

- GitHub: https://github.com/shaung08
- Email: a2369875@gmail.com