Preface
With nothing much to do over the winter break and the blog long overdue for an update, this post walks through installing a fully distributed Hadoop 3.x cluster.
Installation
Installing Hadoop 3.x is not much different from installing earlier Hadoop clusters. For the differences between Hadoop 3.x and Hadoop 2.x, see this article: https://blog.csdn.net/c36qUCnS2zuqF6/article/details/82111579
The preliminary environment setup is not repeated here; it has been covered at length in earlier posts.
This article starts directly from configuring Hadoop. Both hadoop-3.1.3 and jdk1.8.0_191 are unpacked under /usr/local/src.
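For reference, a minimal sketch of the environment variables this article assumes on each node (the paths follow the /usr/local/src layout above; adjust if yours differs):

# appended to /etc/profile on master, slave1 and slave2
export JAVA_HOME=/usr/local/src/jdk1.8.0_191
export HADOOP_HOME=/usr/local/src/hadoop-3.1.3
export PATH=$PATH:$JAVA_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin

Run source /etc/profile afterwards so the current shell picks the variables up.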
System overview
| Operating environment | Hostname | IP address | JDK | Hadoop version |
| --- | --- | --- | --- | --- |
| CentOS 7.0 | master | 192.168.128.180 | jdk1.8.0_191 | hadoop-3.1.3 |
| CentOS 7.0 | slave1 | 192.168.128.181 | jdk1.8.0_191 | hadoop-3.1.3 |
| CentOS 7.0 | slave2 | 192.168.128.182 | jdk1.8.0_191 | hadoop-3.1.3 |
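The hostnames in the table need to resolve on every node, and master needs passwordless SSH to the workers so that the start scripts can reach them. A minimal sketch, assuming everything runs as root as in the prompts later in this post:

# /etc/hosts on all three machines
192.168.128.180 master
192.168.128.181 slave1
192.168.128.182 slave2

# on master: generate a key and copy it to every node
ssh-keygen -t rsa
ssh-copy-id root@master
ssh-copy-id root@slave1
ssh-copy-id root@slave2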
Modifying the configuration files
The files below all live in /usr/local/src/hadoop-3.1.3/etc/hadoop; the <property> entries go inside each file's <configuration> element.
core-site.xml
<property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
</property>
<property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/src/hadoop-3.1.3/tmp</value>
</property>
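fs.default.name is the old alias of fs.defaultFS; it still works in Hadoop 3.x but prints a deprecation warning. The daemons create the hadoop.tmp.dir directory on first use, but you can also create it up front on each node:

[root@master hadoop-3.1.3]# mkdir -p /usr/local/src/hadoop-3.1.3/tmp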
workers
In Hadoop 3.x the old slaves file is renamed to workers; it lists the hosts that run DataNode and NodeManager, one per line (here the two worker nodes):
slave1
slave2

hadoop-env.sh
The following exports go into etc/hadoop/hadoop-env.sh. In Hadoop 3.x the *_USER variables are required when the daemons are started as root, and JAVA_HOME is set here as well so that the daemons find the JDK even in non-interactive SSH sessions:
export JAVA_HOME=/usr/local/src/jdk1.8.0_191
export HDFS_NAMENODE_USER=root
export HDFS_DATANODE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_JOURNALNODE_USER=root
export YARN_RESOURCEMANAGER_USER=root
export YARN_NODEMANAGER_USER=root
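A quick sanity check that these files are picked up; bin/hadoop sources hadoop-env.sh and fails with a JAVA_HOME error if the path is wrong:

[root@master hadoop-3.1.3]# bin/hadoop version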
hdfs-site.xml
<property>
    <name>dfs.replication</name>
    <value>2</value>
</property>
<property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/src/hadoop-3.1.3/tmp/name</value>
</property>
<property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/src/hadoop-3.1.3/tmp/data</value>
</property>
<property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:9001</value>
</property>
<property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
</property>
dfs.http.address pins the NameNode web UI to the familiar 50070 port; in Hadoop 3.x the default moved to 9870.
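After saving the file, the effective values can be read back with hdfs getconf, which is a quick way to catch typos in property names:

[root@master hadoop-3.1.3]# bin/hdfs getconf -confKey dfs.replication
2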
mapred-site.xml
<property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
</property>
<!-- The JobTracker/TaskTracker addresses below are Hadoop 1.x settings; YARN ignores them -->
<property>
    <name>mapred.job.tracker.http.address</name>
    <value>master:50030</value>
</property>
<property>
    <name>mapred.task.tracker.http.address</name>
    <value>master:50060</value>
</property>
<property>
    <name>mapreduce.application.classpath</name>
    <value>
        /usr/local/src/hadoop-3.1.3/etc/hadoop,
        /usr/local/src/hadoop-3.1.3/share/hadoop/common/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/common/lib/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/hdfs/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/hdfs/lib/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/mapreduce/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/mapreduce/lib/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/yarn/*,
        /usr/local/src/hadoop-3.1.3/share/hadoop/yarn/lib/*
    </value>
</property>
yarn-site.xml
<property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
</property>
<property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
</property>
<property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
</property>
<property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
</property>
<property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
</property>
<property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
</property>
Testing
After editing these six configuration files, distribute the Hadoop directory to slave1 and slave2 so that all three nodes carry exactly the same configuration (see the scp sketch below), and then format HDFS on master.
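A minimal way to push the configured directory from master to the workers, assuming passwordless SSH as root and the same /usr/local/src layout on every node:

[root@master ~]# scp -r /usr/local/src/hadoop-3.1.3 root@slave1:/usr/local/src/
[root@master ~]# scp -r /usr/local/src/hadoop-3.1.3 root@slave2:/usr/local/src/

With the copy finished, run the format command on master: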
[root@master hadoop-3.1.3]# bin/hadoop namenode -format
The hadoop namenode -format form is deprecated in 3.x in favor of bin/hdfs namenode -format, but both work. Once the format succeeds, the cluster can be started.
Startup command
[root@master hadoop-3.1.3]# sbin/start-all.sh
The startup messages printed by Hadoop 3.x differ somewhat from those of 2.x.
After a normal startup, jps on the three machines should look roughly like this: master runs NameNode, SecondaryNameNode and ResourceManager, while slave1 and slave2 each run DataNode and NodeManager.
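Beyond jps, the cluster state can be verified from the command line; with the configuration above, hdfs dfsadmin -report should list two live DataNodes and yarn node -list should show two NodeManagers:

[root@master hadoop-3.1.3]# bin/hdfs dfsadmin -report
[root@master hadoop-3.1.3]# bin/yarn node -list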
Next, have a look at the web UIs:
http://192.168.128.180:50070/ (HDFS NameNode web UI)
http://192.168.128.180:8088/ (YARN ResourceManager web UI)
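To confirm that HDFS, YARN and the mapreduce.application.classpath setting work end to end, you can also run the pi example that ships with the 3.1.3 distribution; the job should appear in the ResourceManager UI on port 8088 while it runs:

[root@master hadoop-3.1.3]# bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-3.1.3.jar pi 2 10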
The web pages look different from their Hadoop 2.x counterparts. That completes the Hadoop 3.x installation; thanks for reading.