
Setting up a Hadoop 2.8.0 cluster development environment

Published: 2020-02-26 12:09:41  Source: web  Reads: 158  Author: 苍天火云  Category: Big Data

Goal:

Set up a Hadoop + HBase + ZooKeeper + Hive development environment.

Installation environment:

1. CentOS 192.168.1.101

2. CentOS 192.168.1.102

Development environment:

Windows + Eclipse

I. Install the Hadoop cluster

1. Configure hosts

#vi /etc/hosts

192.168.1.101 master

192.168.1.102 slave1

2. Disable the firewall:

systemctl status firewalld.service # check the firewall status

systemctl stop firewalld.service # stop the firewall

systemctl disable firewalld.service # keep the firewall from starting at boot

3. Configure passwordless SSH access

ssh-keygen -t rsa # generate a key pair (run on both nodes)

On slave1:

cp ~/.ssh/id_rsa.pub ~/.ssh/slave1.id_rsa.pub

#scp ~/.ssh/slave1.id_rsa.pub master:~/.ssh

On master:

cd ~/.ssh

cat id_rsa.pub >> authorized_keys

cat slave1.id_rsa.pub >>authorized_keys

scp authorized_keys slave1:~/.ssh

Test: ssh master

   ssh slave1

4. Install Hadoop

tar -zxvf hadoop-2.8.0.tar.gz

mkdir /usr/hadoop/hadoop-2.8.0/tmp

mkdir /usr/hadoop/hadoop-2.8.0/logs

mkdir /usr/hadoop/hadoop-2.8.0/hdfs

mkdir /usr/hadoop/hadoop-2.8.0/hdfs/data

mkdir /usr/hadoop/hadoop-2.8.0/hdfs/name
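The directory creation above can be condensed into a single `mkdir -p` call (a sketch; the base path matches the directories referenced later in the XML configs):

```shell
# Base install directory (same path the XML configs point at).
HADOOP_HOME=/usr/hadoop/hadoop-2.8.0
# -p creates missing parents and is idempotent, replacing the separate mkdir calls.
mkdir -p "$HADOOP_HOME"/{tmp,logs,hdfs/data,hdfs/name}
```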

Edit the configuration files

In hadoop-env.sh, add export JAVA_HOME

In mapred-env.sh, add export JAVA_HOME

In yarn-env.sh, add export JAVA_HOME
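Since the same one-line JAVA_HOME addition goes into all three scripts, it can be scripted (a sketch, run from the Hadoop install root; the JDK path is a placeholder — substitute your actual JDK location):

```shell
# Append the JAVA_HOME export to each env script unless it is already set.
# The JDK path below is a placeholder; adjust to your installation.
JAVA_HOME_LINE='export JAVA_HOME=/usr/java/jdk1.8.0'
for f in hadoop-env.sh mapred-env.sh yarn-env.sh; do
  grep -q '^export JAVA_HOME=' "etc/hadoop/$f" || echo "$JAVA_HOME_LINE" >> "etc/hadoop/$f"
done
```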

Edit core-site.xml:

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://master:9000</value>
<description>HDFS address</description>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>/usr/hadoop/hadoop-2.8.0/tmp</value>
<description>base for temporary files</description>
</property>
</configuration>

Edit mapred-site.xml

<configuration>
<property>
<name>mapred.job.tracker</name>
<value>http://master:9001</value>
</property>
<property>
<name>mapreduce.framework.name</name>
<value>yarn</value>
</property>
<property>
<name>mapred.system.dir</name>
<value>/usr/hadoop/hadoop-2.8.0/mapred/system</value>
</property>
<property>
<name>mapred.local.dir</name>
<value>/usr/hadoop/hadoop-2.8.0/mapred/local</value>
<final>true</final>
</property>
<property>
<name>mapreduce.jobhistory.address</name>
<value>master:10020</value>
</property>
<property>
<name>mapreduce.jobhistory.webapp.address</name>
<value>master:19888</value>
</property>
</configuration>

Edit yarn-site.xml

<configuration>
<property>
<name>yarn.nodemanager.aux-services</name>
<value>mapreduce_shuffle</value>
</property>
<property>
<name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
<value>org.apache.hadoop.mapred.ShuffleHandler</value>
</property>
<property>
<name>yarn.resourcemanager.address</name>
<value>master:8032</value>
</property>
<property>
<name>yarn.resourcemanager.scheduler.address</name>
<value>master:8030</value>
</property>
<property>
<name>yarn.resourcemanager.resource-tracker.address</name>
<value>master:8031</value>
</property>
<property>
<name>yarn.resourcemanager.admin.address</name>
<value>master:8033</value>
</property>
<property>
<name>yarn.resourcemanager.webapp.address</name>
<value>master:8088</value>
</property>
</configuration>

Edit hdfs-site.xml

<configuration>
<property>
<name>dfs.name.dir</name>
<value>/usr/hadoop/hadoop-2.8.0/hdfs/name</value>
<description>namenode metadata directory</description>
</property>
<property>
<name>dfs.data.dir</name>
<value>/usr/hadoop/hadoop-2.8.0/hdfs/data</value>
<description>datanode data directory</description>
</property>
<property>
<name>dfs.replication</name>
<value>3</value>
<description>number of block replicas</description>
</property>
<property>
<name>dfs.permissions</name>
<value>false</value>
<description>enable HDFS permission checking (true or false)</description>
</property>
</configuration>

Create the slaves file and add slave1 to it.
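Creating the slaves file takes one line (run from the Hadoop install root; add one hostname per line if there are more workers):

```shell
# The slaves file lists worker hostnames, one per line;
# the start scripts launch a DataNode/NodeManager on each entry.
echo slave1 > etc/hadoop/slaves
```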

Copy the files to slave1

scp -r /usr/hadoop slave1:/usr

Add the hadoop bin directory to the PATH environment variable.

Format the namenode

hdfs namenode -format

Start Hadoop

./start-all.sh

Check that the services started: jps

master: should show ResourceManager, SecondaryNameNode, NameNode

slave1: should show DataNode and NodeManager
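One way to check is to grep the jps listing for the expected daemon names (a sketch; `check_daemons` is a helper defined here, not part of Hadoop):

```shell
# check_daemons: verify each expected daemon name appears in a jps listing.
check_daemons() {
  local listing="$1"; shift
  for d in "$@"; do
    echo "$listing" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all expected daemons running"
}

# On master the listing should contain NameNode, SecondaryNameNode, ResourceManager:
#   check_daemons "$(jps)" NameNode SecondaryNameNode ResourceManager
# On slave1:
#   check_daemons "$(jps)" DataNode NodeManager
```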

ZooKeeper + HBase will be covered next time.
