sudo gedit /etc/network/interfaces
(note: /etc/networks is a different file that maps network names to addresses; the interface settings below belong in /etc/network/interfaces)
#interfaces(5) file used by ifup(8) and ifdown(8)
auto lo
iface lo inet loopback
#NAT interface
auto eth0
iface eth0 inet dhcp
auto eth1
iface eth1 inet static
address 192.168.56.100
netmask 255.255.255.0
network 192.168.56.0
broadcast 192.168.56.255
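After saving, bring the interface down and up again so the static address takes effect, then confirm it stuck (this assumes the classic `ifupdown` tooling that owns /etc/network/interfaces is installed):

```shell
# re-read the new /etc/network/interfaces settings for eth1
sudo ifdown eth1 && sudo ifup eth1
# confirm eth1 now carries the static address
ip addr show eth1 | grep 192.168.56.100
```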
sudo gedit /etc/hostname
data1 or master or whatever you like, just choose the name you want lol.
sudo gedit /etc/hosts
# virtual ip for cluster communication
192.168.56.100 master
192.168.56.101 slave1
192.168.56.102 slave2
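Once /etc/hosts is saved on every node, a quick sanity check that the names resolve and the other machines answer (using the hostnames from the table above):

```shell
# name resolution should come from the /etc/hosts entries
getent hosts master slave1 slave2
# one ping per node to confirm reachability
for h in master slave1 slave2; do ping -c 1 "$h"; done
```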
sudo gedit /home/pcdm/programs/hadoop-3.2.1/etc/hadoop/core-site.xml
! Note: if the file opens up blank, double-check that the file/directory actually exists
<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
</property>
<!-- directory where Hadoop stores files generated at runtime -->
<property>
<name>hadoop.tmp.dir</name>
<!-- this path may not exist yet; double-check it -->
<value>/app/hadoop/tmp</value>
</property>
</configuration>
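hadoop.tmp.dir is not created for you, so make the directory before starting Hadoop and hand it to the user that runs the daemons (pcdm here, matching the paths above; adjust if yours differ):

```shell
# create the directory named in hadoop.tmp.dir
sudo mkdir -p /app/hadoop/tmp
# let the hadoop user write to it
sudo chown pcdm:pcdm /app/hadoop/tmp
```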
sudo gedit /home/pcdm/programs/hadoop-3.2.1/etc/hadoop/yarn-site.xml
<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<!-- ResourceManager resource-tracker/scheduler/address -->
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8025</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8050</value>
</property>
</configuration>
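With these addresses set, workers register at master:8025, the scheduler talks on master:8030, and clients submit jobs to master:8050. Once YARN is running you can confirm the ResourceManager is actually listening on them:

```shell
# the ResourceManager should be listening on the ports configured above
ss -tln | grep -E ':(8025|8030|8050)'
```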
sudo gedit /home/pcdm/programs/hadoop-3.2.1/etc/hadoop/mapred-site.xml
<configuration>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.job.tracker</name>
<value>master:54311</value>
</property>
</configuration>
sudo gedit /home/pcdm/programs/hadoop-3.2.1/etc/hadoop/hdfs-site.xml
<configuration>
<property>
<name>dfs.replication</name>
<value>3</value>
</property>
<property>
<name>dfs.datanode.data.dir</name>
<!-- this path may not exist yet; double-check it -->
<value>file:/usr/local/hadoop/hadoop_data/hdfs/datanode</value>
</property>
</configuration>
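Same story for dfs.datanode.data.dir: create it first, then format the NameNode once (on master only) and bring the cluster up. This sketch assumes hadoop-3.2.1's bin and sbin directories are on PATH:

```shell
# create the datanode storage directory from hdfs-site.xml
sudo mkdir -p /usr/local/hadoop/hadoop_data/hdfs/datanode
sudo chown -R pcdm:pcdm /usr/local/hadoop/hadoop_data
# one-time format on master, then start HDFS and YARN
hdfs namenode -format
start-dfs.sh
start-yarn.sh
# jps should list NameNode/ResourceManager on master, DataNode/NodeManager on workers
jps
```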