# Hadoop

```
sudo apt-get install ssh
```
```
sudo apt-get install pdsh
```

## OpenJDK

```
sudo apt-get install default-jre
```
```
sudo apt-get install default-jdk
```

## Oracle JDK

```
sudo add-apt-repository ppa:webupd8team/java
```
```
sudo apt-get update
```

Download the required version from https://www.oracle.com/java/technologies/javase/javase-jdk8-downloads.html (Hadoop 3.3.1 supports Java 8). You need an Oracle account to download. After downloading, run:

```
sudo su
tar zxvf jdk-8u301-linux-x64.tar.gz -C /usr/lib/jvm/
update-alternatives --install "/usr/bin/java" "java" "/usr/lib/jvm/jdk1.8.0_301/bin/java" 1
```

Then pick the Java version with the following (a numbered list appears; just enter the number):

```
update-alternatives --config java
exit
java -version
```

If the VM warns that storage is about to run out, allocate more space, or remove the JVMs you no longer need:

```
sudo rm -rf /usr/lib/jvm/*
```

## hadoop

```
sudo tar zxvf hadoop-3.3.1.tar.gz
```

Open

```
hadoop-3.3.1/etc/hadoop/hadoop-env.sh
```

and add:

```
# set to the root of your Java installation
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_301
```

(`JAVA_HOME` is the directory the JDK was just extracted into.)

Then, from the `hadoop-3.3.1` directory, run:

```
bin/hadoop
```

If the usage documentation is printed, the installation succeeded.

### Standalone Operation

```
mkdir input
cp etc/hadoop/*.xml input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'
cat output/*
```

A `_SUCCESS` file will appear in the `output` directory.

### Pseudo-Distributed Operation

etc/hadoop/core-site.xml:

```
<configuration>
    <property>
        <name>fs.defaultFS</name>
        <value>hdfs://localhost:9000</value>
    </property>
</configuration>
```

etc/hadoop/hdfs-site.xml:

```
<configuration>
    <property>
        <name>dfs.replication</name>
        <value>1</value>
    </property>
</configuration>
```

#### Set up SSH

First run:

```
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 0600 ~/.ssh/authorized_keys
```

If you get "Permission denied", edit /etc/ssh/sshd_config:

```
PermitRootLogin yes
```

then restart sshd to apply the change:

```
sudo service sshd restart
```

Then confirm SSH works:

```
ssh localhost
```

You should see "Welcome to Ubuntu...".
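A common pitfall worth noting (not in the original notes, but a well-known workaround): on Ubuntu, `sbin/start-dfs.sh` may fail with pdsh connection errors, because pdsh defaults to rsh rather than ssh. Forcing it to use ssh usually fixes this:

```shell
# Make pdsh (installed earlier) dispatch commands over ssh
# instead of its default rsh transport.
export PDSH_RCMD_TYPE=ssh
```

Add the line to `~/.bashrc` (or to `hadoop-env.sh`) if you want it to persist across sessions.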
#### Set Hadoop environment variables

Run:

```
bin/hdfs namenode -format
sbin/start-dfs.sh
```

If an error screen appears, add the following to `etc/hadoop/hadoop-env.sh`:

```
export HDFS_DATANODE_USER=<username>
export HDFS_NAMENODE_USER=<username>
export HDFS_SECONDARYNAMENODE_USER=<username>
export YARN_RESOURCEMANAGER_USER=<username>
export YARN_NODEMANAGER_USER=<username>
```

Then run again:

```
bin/hdfs namenode -format
sbin/start-dfs.sh
```

Check whether the daemons came up with:

```
jps
```

The NameNode, DataNode, and SecondaryNameNode processes should be listed; alternatively, open the NameNode web UI at localhost:9870.
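Once the daemons are up, the standalone grep example above can also be run against HDFS. This is a sketch following the official single-node setup guide; `<username>` stands for your login user and is a placeholder, not a value from these notes:

```shell
# Create the HDFS home directory and an input directory
bin/hdfs dfs -mkdir -p /user/<username>
bin/hdfs dfs -mkdir input

# Copy the config files into HDFS and run the example job
bin/hdfs dfs -put etc/hadoop/*.xml input
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.3.1.jar grep input output 'dfs[a-z.]+'

# View the result stored in HDFS, then stop the daemons
bin/hdfs dfs -cat output/*
sbin/stop-dfs.sh
```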