
Building and Deploying Hadoop: HDFS

Published 2020-07-02 · Author: Xuenqlve · Category: Big Data

HDFS: the Hadoop Distributed File System


Distributed File Systems

A distributed file system effectively solves the problems of storing and managing data at scale:

– It extends a file system fixed to a single location to arbitrarily many locations/file systems.

– Many nodes together form one file system network.

– Each node can sit in a different location; nodes communicate and transfer data over the network.

– Users of a distributed file system do not need to care which node stores the data or which node it is fetched from; they manage and store data exactly as they would with a local file system.


HDFS Roles and Concepts

• HDFS is the foundation of data storage and management in the Hadoop ecosystem. It is a highly fault-tolerant system designed to run on low-cost commodity hardware.

• Roles and concepts

    – Client

    – NameNode

    – Secondary NameNode

    – DataNode


• NameNode

    – The master node: manages the HDFS namespace and block mapping information, applies the replication policy, and handles all client requests.

• Secondary NameNode

    – Periodically merges fsimage and the edits log and pushes the result back to the NameNode.

    – In an emergency, it can help recover the NameNode.

• However, the Secondary NameNode is NOT a hot standby for the NameNode.

• DataNode

    – Data storage node: stores the actual data blocks.

    – Reports storage information to the NameNode.

• Client

    – Splits files into blocks.

    – Accesses HDFS.

    – Interacts with the NameNode to obtain file location information.

    – Interacts with DataNodes to read and write data.

• Block

    – Default block size is 128 MB in Hadoop 2.x (it was 64 MB in Hadoop 1.x).

    – Each block can have multiple replicas.
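
The block size is tunable per cluster. As a minimal sketch (dfs.blocksize is the standard Hadoop 2.x property, documented in the hdfs-default.xml reference linked later in this article; the value is in bytes), it can be overridden in etc/hadoop/hdfs-site.xml:

  <property>
    <name>dfs.blocksize</name>                // block size in bytes; 134217728 = 128 MB
    <value>134217728</value>
  </property>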





Building and Deploying the HDFS Distributed File System

Preparing the lab environment:

# vim /etc/hosts

    .. ..

    192.168.4.1 master

    192.168.4.2 node1

    192.168.4.3 node2

    192.168.4.4 node3

# sed -ri  "/Host */aStrictHostKeyChecking no" /etc/ssh/ssh_config        // append StrictHostKeyChecking no so first-time connections skip the host-key prompt

# ssh-keygen        // accept the defaults (empty passphrase)

# for i in {1..4} 

> do

> ssh-copy-id 192.168.4.${i}

> done
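
Before moving on, it is worth confirming that passwordless login actually works; a quick check against the same address range:

# for i in {1..4}
> do
> ssh 192.168.4.${i} "hostname"        // must print each hostname without prompting for a password
> done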

# for i in {1..4}        // sync /etc/hosts to every node

> do

> rsync -a /etc/hosts 192.168.4.${i}:/etc/hosts

> done

# rm -rf /etc/yum.repos.d/*

# vim /etc/yum.repos.d/yum.repo   // configure the network yum repository

    [yum]

    name=yum

    baseurl=http://192.168.4.254/rhel7

    gpgcheck=0

# for i in {2..4}

> do

> ssh 192.168.4.${i} "rm -rf /etc/yum.repos.d/*"

> rsync -a /etc/yum.repos.d/yum.repo 192.168.4.${i}:/etc/yum.repos.d/

> done

# for i in {1..4}

> do

> ssh 192.168.4.${i} 'sed -ri "s/^(SELINUX=).*/\1disabled/" /etc/selinux/config ; yum -y remove firewalld' 

> done

// reboot all machines
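
After the reboot, a quick sanity check that SELinux is really disabled on every machine (getenforce should print Disabled):

# for i in {1..4}
> do
> ssh 192.168.4.${i} "getenforce"
> done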


Build the fully distributed cluster

System plan:

Host           Hostname    Role                          Software

192.168.4.1    master      NameNode, SecondaryNameNode   HDFS

192.168.4.2    node1       DataNode                      HDFS

192.168.4.3    node2       DataNode                      HDFS

192.168.4.4    node3       DataNode                      HDFS


Install the Java environment and the jps diagnostic tool on all systems (both ship with java-1.8.0-openjdk-devel):

# for i in {1..4}

> do

> ssh 192.168.4.${i} "yum -y install java-1.8.0-openjdk-devel.x86_64"

> done

# which java

/usr/bin/java

# readlink -f /usr/bin/java

/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64/jre/bin/java

Install Hadoop

# tar -xf hadoop-2.7.3.tar.gz

# mv hadoop-2.7.3 /usr/local/hadoop


Modify the configuration

# cd /usr/local/hadoop/

# sed -ri "s;(export JAVA_HOME=).*;\1/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64/jre;" etc/hadoop/hadoop-env.sh

# sed -ri "s;(export HADOOP_CONF_DIR=).*;\1/usr/local/hadoop/etc/hadoop;" etc/hadoop/hadoop-env.sh

# sed -n "25p;33p" etc/hadoop/hadoop-env.sh

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64/jre

export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop

// parameter reference: http://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-common/core-default.xml

# vim etc/hadoop/core-site.xml

.. .. 

<configuration>

  <property>

    <name>fs.defaultFS</name>                    // the default file system URI

    <value>hdfs://master:9000</value>

  </property>

  <property>

    <name>hadoop.tmp.dir</name>                // base directory where Hadoop stores its data

    <value>/var/hadoop</value>

  </property>

</configuration>

// create the hadoop data directory (hadoop.tmp.dir) on every machine

# for i in {1..4}

> do

> ssh 192.168.4.${i} "mkdir /var/hadoop"

> done

// parameter reference: http://hadoop.apache.org/docs/r2.7.5/hadoop-project-dist/hadoop-hdfs/hdfs-default.xml

# vim etc/hadoop/hdfs-site.xml

<configuration>

  <property>

    <name>dfs.namenode.http-address</name>        // NameNode web UI address

    <value>master:50070</value>

  </property>

  <property>

    <name>dfs.namenode.secondary.http-address</name>        // Secondary NameNode web UI address

    <value>master:50090</value>

  </property>

  <property>

    <name>dfs.replication</name>                // number of replicas per block

    <value>2</value>

  </property>

</configuration>

# vim etc/hadoop/slaves         // hosts on which to start DataNodes

node1

node2 

node3

Once the configuration is complete, copy the hadoop directory to all machines:

# for i in {2..4}

> do

> rsync -azSH --delete /usr/local/hadoop 192.168.4.${i}:/usr/local/ -e "ssh"

> done


// format HDFS on the NameNode

# ./bin/hdfs namenode -format

Seeing "successfully formatted." in the output means the format succeeded.
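
As an optional check, the NameNode metadata directory should now exist under the hadoop.tmp.dir configured above (exact file names vary with the transaction id):

# ls /var/hadoop/dfs/name/current/
fsimage_0000000000000000000  fsimage_0000000000000000000.md5  seen_txid  VERSION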

// if there were no errors, start the cluster

# ./sbin/start-dfs.sh   
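
If any daemon fails to come up, its log is the first place to look; with the default log layout the file name encodes the user, daemon, and host (shown here for the root user on master):

# tail -20 /usr/local/hadoop/logs/hadoop-root-namenode-master.log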


After startup, run jps on the NameNode and each DataNode:

# for i in master node{1..3}

> do

> echo $i

> ssh ${i} "jps"

> done

master

4562 SecondaryNameNode

4827 NameNode

5149 Jps

node1

3959 DataNode

4105 Jps

node2

3957 Jps

3803 DataNode

node3

3956 Jps

3803 DataNode


# ./bin/hdfs dfsadmin -report                // list the nodes that registered successfully

    Configured Capacity: 160982630400 (149.93 GB)

    Present Capacity: 150644051968 (140.30 GB)

    DFS Remaining: 150644039680 (140.30 GB)

    DFS Used: 12288 (12 KB)

    DFS Used%: 0.00%

    Under replicated blocks: 0

    Blocks with corrupt replicas: 0

    Missing blocks: 0

    Missing blocks (with replication factor 1): 0

    

    -------------------------------------------------

    Live datanodes (3):

    

    Name: 192.168.4.2:50010 (node1)

    Hostname: node1

    Decommission Status : Normal

    Configured Capacity: 53660876800 (49.98 GB)

    DFS Used: 4096 (4 KB)

    Non DFS Used: 3446755328 (3.21 GB)

    DFS Remaining: 50214117376 (46.77 GB)

    DFS Used%: 0.00%

    DFS Remaining%: 93.58%

    Configured Cache Capacity: 0 (0 B)

    Cache Used: 0 (0 B)

    Cache Remaining: 0 (0 B)

    Cache Used%: 100.00%

    Cache Remaining%: 0.00%

    Xceivers: 1

    Last contact: Mon Jan 29 21:17:39 EST 2018

    

    

    Name: 192.168.4.4:50010 (node3)

    Hostname: node3

    Decommission Status : Normal

    Configured Capacity: 53660876800 (49.98 GB)

    DFS Used: 4096 (4 KB)

    Non DFS Used: 3445944320 (3.21 GB)

    DFS Remaining: 50214928384 (46.77 GB)

    DFS Used%: 0.00%

    DFS Remaining%: 93.58%

    Configured Cache Capacity: 0 (0 B)

    Cache Used: 0 (0 B)

    Cache Remaining: 0 (0 B)

    Cache Used%: 100.00%

    Cache Remaining%: 0.00%

    Xceivers: 1

    Last contact: Mon Jan 29 21:17:39 EST 2018

    

    

    Name: 192.168.4.3:50010 (node2)

    Hostname: node2

    Decommission Status : Normal

    Configured Capacity: 53660876800 (49.98 GB)

    DFS Used: 4096 (4 KB)

    Non DFS Used: 3445878784 (3.21 GB)

    DFS Remaining: 50214993920 (46.77 GB)

    DFS Used%: 0.00%

    DFS Remaining%: 93.58%

    Configured Cache Capacity: 0 (0 B)

    Cache Used: 0 (0 B)

    Cache Remaining: 0 (0 B)

    Cache Used%: 100.00%

    Cache Remaining%: 0.00%

    Xceivers: 1

    Last contact: Mon Jan 29 21:17:39 EST 2018


[Screenshots omitted: namenode, secondarynamenode, and datanode views]





Basic HDFS Usage

Basic HDFS commands closely mirror ordinary shell commands:

# ./bin/hadoop fs -ls hdfs://master:9000/

# ./bin/hadoop fs -mkdir /test

# ./bin/hadoop fs -ls /

Found 1 items

drwxr-xr-x   - root supergroup          0 2018-01-29 21:35 /test

# ./bin/hadoop fs -rmdir /test

# ./bin/hadoop fs -mkdir /input

# ./bin/hadoop fs -put *.txt /input                    // upload files

# ./bin/hadoop fs -ls /input

Found 3 items

-rw-r--r--   2 root supergroup      84854 2018-01-29 21:37 /input/LICENSE.txt

-rw-r--r--   2 root supergroup      14978 2018-01-29 21:37 /input/NOTICE.txt

-rw-r--r--   2 root supergroup       1366 2018-01-29 21:37 /input/README.txt

# ./bin/hadoop fs -get /input/README.txt /root/            // download a file

# ls /root/README.txt 

/root/README.txt
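
To confirm that the uploaded files really carry the two replicas configured via dfs.replication, fsck reports per-block placement (a quick sketch using the standard fsck options):

# ./bin/hdfs fsck /input -files -blocks -locations        // lists each block and the DataNodes holding its replicas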


Adding an HDFS Node

– 1. Set up the full hadoop environment on the new node: hostname, passwordless ssh login, disabled selinux and firewall, and the Java environment.

[root@newnode ~]# yum -y install java-1.8.0-openjdk-devel.x86_64 

[root@master ~]# cat /etc/hosts

192.168.4.1 master

192.168.4.2 node1

192.168.4.3 node2

192.168.4.4 node3

192.168.4.5 newnode


– 2. Add the new node to the NameNode's slaves file

[root@master ~]# cd /usr/local/hadoop/etc/hadoop/

[root@master hadoop]# echo newnode >> slaves 


– 3. Copy the NameNode's configuration files to the configuration directory on the existing nodes

# cat /root/rsyncfile.sh 

#!/bin/bash

for i in node{1..3}        # existing DataNodes only; the new node gets a full copy in the next step

do

  rsync -azSH --delete /usr/local/hadoop/etc/hadoop ${i}:/usr/local/hadoop/etc/ -e 'ssh' &

done

wait

[root@master hadoop]# bash /root/rsyncfile.sh


– 4. Copy the entire Hadoop directory from master to the new node

[root@newnode ~]# rsync -azSH --delete master:/usr/local/hadoop /usr/local


– 5. Start the DataNode on the new node

[root@newnode ~]# cd /usr/local/hadoop/

[root@newnode hadoop]# ./sbin/hadoop-daemon.sh start datanode

[root@newnode hadoop]# jps

4007 Jps

3705 DataNode


– 6. Check the cluster status

[root@master hadoop]# cd /usr/local/hadoop/

[root@master hadoop]# ./bin/hdfs dfsadmin -report

Safe mode is ON

Configured Capacity: 268304384000 (249.88 GB)

Present Capacity: 249863049216 (232.70 GB)

DFS Remaining: 249862311936 (232.70 GB)

DFS Used: 737280 (720 KB)

DFS Used%: 0.00%

Under replicated blocks: 0

Blocks with corrupt replicas: 0

Missing blocks: 0

Missing blocks (with replication factor 1): 0


-------------------------------------------------

Live datanodes (5):

...

Name: 192.168.4.5:50010 (newnode)

Hostname: newnode

Decommission Status : Normal

Configured Capacity: 53660876800 (49.98 GB)

DFS Used: 4096 (4 KB)

Non DFS Used: 3662835712 (3.41 GB)

DFS Remaining: 49998036992 (46.56 GB)

DFS Used%: 0.00%

DFS Remaining%: 93.17%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Sun Jan 28 20:30:23 EST 2018


...

– 7. Set the balancer bandwidth and rebalance the data

[root@master hadoop]# ./bin/hdfs dfsadmin -setBalancerBandwidth 67108864        // bytes per second; 67108864 = 64 MB/s per DataNode

[root@master hadoop]# ./sbin/start-balancer.sh -threshold 5        // balance until each node is within 5% of the cluster average usage
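
With the bandwidth capped at 64 MB/s per DataNode, balancing can take a while on a loaded cluster; progress can be followed in the balancer log (assuming the default log location and the root user):

# tail -f /usr/local/hadoop/logs/hadoop-root-balancer-master.log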


Removing a Node (Decommissioning)

– Configure hdfs-site.xml on the NameNode:

– dfs.replication sets the replica count

– add a dfs.hosts.exclude property

[root@master hadoop]# vim etc/hadoop/hdfs-site.xml 

...

  <property>

    <name>dfs.hosts.exclude</name>

    <value>/usr/local/hadoop/etc/hadoop/exclude</value>

  </property>

...


– Create the exclude file listing the hostnames of the nodes to remove, and take them out of slaves:

[root@master hadoop]# vim etc/hadoop/slaves 

node1

node2 

node3

[root@master hadoop]# vim  etc/hadoop/exclude

newnode

# cat /root/rsyncfile.sh 

#!/bin/bash

for i in node{1..3} newnode        # push the updated slaves/exclude files to every node

do

  rsync -azSH --delete /usr/local/hadoop/etc/hadoop ${i}:/usr/local/hadoop/etc/ -e 'ssh' &

done

wait

[root@master hadoop]# bash /root/rsyncfile.sh

[root@master hadoop]# ./bin/hdfs dfsadmin -refreshNodes

[root@master hadoop]# ./bin/hdfs dfsadmin -report

...

Name: 192.168.4.5:50010 (newnode)

Hostname: newnode

Decommission Status : Decommission in progress        // data is being migrated off this node

Configured Capacity: 53660876800 (49.98 GB)

DFS Used: 12288 (12 KB)

Non DFS Used: 3662950400 (3.41 GB)

DFS Remaining: 49997914112 (46.56 GB)

DFS Used%: 0.00%

DFS Remaining%: 93.17%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Sun Jan 28 20:52:01 EST 2018

...


[root@master hadoop]# ./bin/hdfs dfsadmin -report

...

Name: 192.168.4.5:50010 (newnode)

Hostname: newnode

Decommission Status : Decommissioned                    // final state

Configured Capacity: 53660876800 (49.98 GB)

DFS Used: 12288 (12 KB)

Non DFS Used: 3662950400 (3.41 GB)

DFS Remaining: 49997914112 (46.56 GB)

DFS Used%: 0.00%

DFS Remaining%: 93.17%

Configured Cache Capacity: 0 (0 B)

Cache Used: 0 (0 B)

Cache Remaining: 0 (0 B)

Cache Used%: 100.00%

Cache Remaining%: 0.00%

Xceivers: 1

Last contact: Sun Jan 28 20:52:43 EST 2018

...

// the DataNode may only be stopped once its status has changed to Decommissioned

[root@newnode hadoop]# ./sbin/hadoop-daemon.sh stop datanode

[root@newnode hadoop]# jps

4045 Jps
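
As a final check from the master, the report should drop back to three live DataNodes (the stopped node may linger in the Dead list for a few minutes before being aged out):

[root@master hadoop]# ./bin/hdfs dfsadmin -report | grep "Live datanodes"        // expect: Live datanodes (3):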



