
Setting Up a Fully Distributed Hadoop 0.20.2 Environment

Published: 2020-08-04 10:38:27  Source: Web  Views: 498  Author: 断臂人  Column: Big Data

The three servers are configured with the following IPs:

192.168.11.131

192.168.11.132

192.168.11.133


Set each server's hostname.

master:

# hostnamectl set-hostname master


Set the other two to slave1 and slave2 in the same way.
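The steps that follow address the nodes by hostname (ssh slave1, hdfs://master:9000), which assumes every server can resolve those names. That mapping step is not shown above; a minimal sketch of the /etc/hosts entries, using the IPs listed earlier (written to a temp file here so the snippet is safe to run anywhere; on a real node, append these lines to /etc/hosts):

```shell
# The three name-to-IP mappings every node needs (IPs from the section above).
# Written to a temp file for illustration; on a real node append to /etc/hosts.
hosts_file=$(mktemp)
cat > "$hosts_file" <<'EOF'
192.168.11.131 master
192.168.11.132 slave1
192.168.11.133 slave2
EOF
cat "$hosts_file"
```

Without these entries, the later hostname-based SSH and Hadoop configuration will fail with resolution errors.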


Disable SELinux and the firewall on every server:

# vi /etc/sysconfig/selinux

SELINUX=enforcing --> SELINUX=disabled


# setenforce 0


# systemctl stop firewalld

# systemctl disable firewalld


Replace the yum repository:

[root@master ~]# mkdir apps


Upload the package

wget-1.14-15.el7.x86_64.rpm


[root@master apps]# rpm -ivh wget-1.14-15.el7.x86_64.rpm


[root@master apps]# cd /etc/yum.repos.d/

[root@master yum.repos.d]# wget http://mirrors.aliyun.com/repo/Centos-7.repo


[root@master yum.repos.d]# mv Centos-7.repo CentOS-Base.repo


[root@master yum.repos.d]# scp CentOS-Base.repo root@192.168.11.132:/etc/yum.repos.d/


[root@master yum.repos.d]# scp CentOS-Base.repo root@192.168.11.133:/etc/yum.repos.d/


Run on every server:

# yum clean all


# yum makecache


# yum update


NTP time synchronization:

master acts as the NTP server. Install ntp (needed on all three nodes, since the slaves also run ntpd):

# yum install -y ntp


ntpserver:

Set the clock on master, the primary NTP server:

# date -s "2018-05-27 23:03:30"


# vi /etc/ntp.conf

Add two lines below this comment:

#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

server 127.127.1.0

fudge 127.127.1.0 stratum 11


Comment out the following lines:

#server 0.centos.pool.ntp.org iburst

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst


# systemctl start ntpd.service


# systemctl enable ntpd.service


slave1 and slave2 act as NTP clients; configure them as follows:

# vi /etc/ntp.conf

Likewise, add two lines below the comment:

#restrict 192.168.1.0 mask 255.255.255.0 nomodify notrap

server 192.168.11.131

fudge 127.127.1.0 stratum 11


And comment out the same four lines:

#server 0.centos.pool.ntp.org iburst

#server 1.centos.pool.ntp.org iburst

#server 2.centos.pool.ntp.org iburst

#server 3.centos.pool.ntp.org iburst


# systemctl start ntpd.service


# systemctl enable ntpd.service


Syncing the time produced an error:

# ntpdate 192.168.11.131

25 Jun 07:39:15 ntpdate[25429]: the NTP socket is in use, exiting


Fix: ntpd itself is holding UDP port 123, so ntpdate cannot bind it. Find and stop the process:

# lsof -i:123

-bash: lsof: command not found


# yum install -y lsof


# lsof -i:123

COMMAND  PID USER   FD   TYPE DEVICE SIZE/OFF NODE NAME

ntpd    1819  ntp   16u  IPv4  33404      0t0  UDP *:ntp 

ntpd    1819  ntp   17u  IPv6  33405      0t0  UDP *:ntp 

ntpd    1819  ntp   18u  IPv4  33410      0t0  UDP localhost:ntp 

ntpd    1819  ntp   19u  IPv4  33411      0t0  UDP slave1:ntp 

ntpd    1819  ntp   20u  IPv6  33412      0t0  UDP localhost:ntp 

ntpd    1819  ntp   21u  IPv6  33413      0t0  UDP slave1:ntp 


# kill -9 1819


Update the time again:

# ntpdate 192.168.11.131

24 Jun 23:37:27 ntpdate[1848]: step time server 192.168.11.131 offset -28828.363808 sec


# date

Sun Jun 24 23:37:32 CST 2018


Create the Hadoop user (on every node):

# groupadd hduser

# useradd -g hduser hduser

# passwd hduser


Passwordless SSH authentication:

Generate a key pair and authorized_keys on every node:

# su hduser

$ cd

$ ssh-keygen -t rsa

Generating public/private rsa key pair.

Enter file in which to save the key (/home/hduser/.ssh/id_rsa): 

Created directory '/home/hduser/.ssh'.

Enter passphrase (empty for no passphrase): 

Enter same passphrase again: 

Your identification has been saved in /home/hduser/.ssh/id_rsa.

Your public key has been saved in /home/hduser/.ssh/id_rsa.pub.

The key fingerprint is:

SHA256:KfyLZTsN3U89CbFAoOsrkI9YRz3rdKR4vr/75R1A7eE hduser@master

The key's randomart image is:

+---[RSA 2048]----+

|         .o.     |

|        .  . ..  |

|      ..    ..oo |

|     o o.o  .oo .|

|    o +.S. . ..E.|

|   + o.B... . oo.|

|  o = =.=o   + ..|

| . . o *oo. o o .|

|      oo==+. . . |

+----[SHA256]-----+

$ cd .ssh/

$ cp id_rsa.pub authorized_keys
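One pitfall worth noting before the cross-authentication below: sshd refuses key authentication (and silently falls back to password prompts) if ~/.ssh or authorized_keys is group- or world-readable. A sketch of the required permissions, demonstrated on a temporary directory so it is safe to run anywhere; on a real node apply the two chmod commands to your actual ~/.ssh:

```shell
# sshd requires strict permissions on the key directory and authorized_keys.
# Demonstrated on a temp dir; on a real node chmod ~/.ssh the same way.
demo_home=$(mktemp -d)
mkdir -p "$demo_home/.ssh"
touch "$demo_home/.ssh/authorized_keys"
chmod 700 "$demo_home/.ssh"              # directory: owner-only
chmod 600 "$demo_home/.ssh/authorized_keys"  # key file: owner read/write only
stat -c '%a %n' "$demo_home/.ssh" "$demo_home/.ssh/authorized_keys"
```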


Cross-authenticate all nodes:

master:

[hduser@master .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave1

[hduser@master .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave2


Verify:

[hduser@master .ssh]$ ssh slave1

Last failed login: Wed Jun 27 04:55:44 CST 2018 from 192.168.11.131 on ssh:notty

There was 1 failed login attempt since the last successful login.

Last login: Wed Jun 27 04:50:05 2018

[hduser@slave1 ~]$ exit

logout

Connection to slave1 closed.

[hduser@master .ssh]$ ssh slave2

Last login: Wed Jun 27 04:51:53 2018

[hduser@slave2 ~]$ 


slave1:

[hduser@slave1 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@master

[hduser@slave1 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave2


slave2:

[hduser@slave2 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@master

[hduser@slave2 .ssh]$ ssh-copy-id -i id_rsa.pub hduser@slave1


Upload the packages:

[hduser@master ~]$ cd src

[hduser@master src]$ ll

total 356128

-rw-r--r-- 1 root root  44575568 Jun 16 17:24 hadoop-0.20.2.tar.gz

-rw-r--r-- 1 root root 288430080 Mar 16  2016 jdk1.7.0_79.tar


Configure the JDK:

[hduser@master src]$ tar -xf jdk1.7.0_79.tar -C ..

[hduser@master src]$ cd ..

[hduser@master ~]$ vi .bashrc

Add:

export JAVA_HOME=/home/hduser/jdk1.7.0_79

export JRE_HOME=$JAVA_HOME/jre

export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib/rt.jar

export PATH=$PATH:$JAVA_HOME/bin


[hduser@master ~]$ source .bashrc 


[hduser@master ~]$ java -version

java version "1.7.0_79"

Java(TM) SE Runtime Environment (build 1.7.0_79-b15)

Java HotSpot(TM) 64-Bit Server VM (build 24.79-b02, mixed mode)


Configure the other two nodes the same way.


Configure Hadoop:

Extract on every node:

tar -zxf hadoop-0.20.2.tar.gz -C ..


master:

[hduser@master conf]$ pwd

/home/hduser/hadoop-0.20.2/conf


[hduser@master conf]$ vi hadoop-env.sh 

export JAVA_HOME=/home/hduser/jdk1.7.0_79


[hduser@master conf]$ vi core-site.xml 

<configuration>

        <property>

                <name>fs.default.name</name>

                <value>hdfs://master:9000</value>

        </property>

</configuration>


[hduser@master conf]$ vi hdfs-site.xml 

<configuration>

        <property>

                <name>dfs.replication</name>

                <value>2</value>

        </property>

</configuration>


[hduser@master conf]$ vi mapred-site.xml

<configuration>

        <property>

                <name>mapred.job.tracker</name>

                <value>master:9001</value>

        </property>

</configuration>


[hduser@master conf]$ vi masters 

#localhost

master


[hduser@master conf]$ vi slaves 

#localhost

slave1

slave2


Copy the configuration files to the other two nodes:

[hduser@master conf]$ scp hadoop-env.sh slave1:~/hadoop-0.20.2/conf/   

[hduser@master conf]$ scp core-site.xml slave1:~/hadoop-0.20.2/conf/   

[hduser@master conf]$ scp hdfs-site.xml slave1:~/hadoop-0.20.2/conf/    

[hduser@master conf]$ scp mapred-site.xml slave1:~/hadoop-0.20.2/conf/    

[hduser@master conf]$ scp masters slave1:~/hadoop-0.20.2/conf/

[hduser@master conf]$ scp slaves slave1:~/hadoop-0.20.2/conf/  

[hduser@master conf]$ scp hadoop-env.sh slave2:~/hadoop-0.20.2/conf/

[hduser@master conf]$ scp core-site.xml slave2:~/hadoop-0.20.2/conf/    

[hduser@master conf]$ scp hdfs-site.xml slave2:~/hadoop-0.20.2/conf/ 

[hduser@master conf]$ scp mapred-site.xml slave2:~/hadoop-0.20.2/conf/    

[hduser@master conf]$ scp masters slave2:~/hadoop-0.20.2/conf/   

[hduser@master conf]$ scp slaves slave2:~/hadoop-0.20.2/conf/
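The twelve scp commands above can also be generated with a nested loop. This is a dry-run sketch that only prints the commands; pipe the output to sh (from the conf directory) to execute them:

```shell
# Generate the same twelve copy commands with a loop over hosts and files.
# Printed rather than executed so the sketch is a safe dry run.
cmds=$(
  for host in slave1 slave2; do
    for f in hadoop-env.sh core-site.xml hdfs-site.xml mapred-site.xml masters slaves; do
      echo "scp $f $host:~/hadoop-0.20.2/conf/"
    done
  done
)
echo "$cmds"
```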


Format the HDFS filesystem (first start only; reformatting an existing cluster destroys its metadata):

[hduser@master conf]$ cd ../bin

[hduser@master bin]$ ./hadoop namenode -format


Start the services:

[hduser@master bin]$ ./start-all.sh 


[hduser@master bin]$ jps

1681 JobTracker

1780 Jps

1618 SecondaryNameNode

1480 NameNode


[hduser@slave1 conf]$ jps

1544 Jps

1403 DataNode

1483 TaskTracker


[hduser@slave2 conf]$ jps

1494 TaskTracker

1414 DataNode

1555 Jps
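With all daemons listed, the cluster is up. As a quick scripted sanity check, the jps output can be matched against the daemons each node is supposed to run. check_daemons below is a hypothetical helper, not a Hadoop command; here it is exercised against the master listing shown above:

```shell
# check_daemons: verify that every required daemon name appears in a jps listing.
# $1 is the jps output; the remaining args are the daemons that must be present.
check_daemons() {
  out=$1; shift
  for d in "$@"; do
    echo "$out" | grep -qw "$d" || { echo "missing: $d"; return 1; }
  done
  echo "all daemons present"
}

# Check against the master's jps listing from above.
check_daemons "$(printf '1681 JobTracker\n1618 SecondaryNameNode\n1480 NameNode\n')" \
  JobTracker SecondaryNameNode NameNode
```

On the slaves the required set would be DataNode and TaskTracker instead.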

