Difference between hadoop-daemon.sh and hadoop-daemons.sh:
hadoop-daemon.sh starts a daemon on the local node only.
hadoop-daemons.sh starts a daemon on remote nodes as well (every host listed in the slaves file).
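For example, a minimal sketch contrasting the two (the daemon name here is illustrative; hadoop-daemons.sh reads its host list from the slaves file):
hadoop-daemon.sh start datanode     # starts a DataNode on the local machine only
hadoop-daemons.sh start datanode    # starts a DataNode on every host listed in slaves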
1. Start the JournalNodes (JN)
hadoop-daemons.sh start journalnode
hdfs namenode -initializeSharedEdits    # copies the edits log files to the JournalNodes; on first setup, run this after formatting the NameNode
Visit http://hadoop-yarn1:8480 to check whether the JournalNode is running properly.
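To double-check from the shell, jps (bundled with the JDK) should show a JournalNode process on each JN host:
jps | grep JournalNode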
2. Format the NameNode and start the Active NameNode
a) On the Active NameNode node, format the NameNode:
hdfs namenode -format
hdfs namenode -initializeSharedEdits
JournalNode initialization is now complete.
b) Start the Active NameNode:
hadoop-daemon.sh start namenode
3. Start the Standby NameNode
a) On the Standby NameNode node, bootstrap the Standby node. This copies the metadata from the Active NameNode over to the Standby NameNode:
hdfs namenode -bootstrapStandby
b) Start the Standby node:
hadoop-daemon.sh start namenode
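With both NameNodes started, their web UIs should respond at the HTTP addresses configured in hdfs-site.xml below; a quick check with curl:
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop-yarn1:50070    # expect 200
curl -s -o /dev/null -w "%{http_code}\n" http://hadoop-yarn2:50070    # expect 200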
4. Start Automatic Failover
Create a monitoring node (ZNode) such as /hadoop-ha/ns1 in ZooKeeper:
hdfs zkfc -formatZK
start-dfs.sh
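To confirm the ZNode was created, ZooKeeper's own CLI can be used (zkCli.sh ships with ZooKeeper; the server address matches ha.zookeeper.quorum below):
zkCli.sh -server hadoop-yarn1:2181
# inside the zkCli shell:
ls /hadoop-ha
ls /hadoop-ha/ns1
The first ls should list ns1; once the ZKFC daemons are running, the second shows entries such as ActiveStandbyElectorLock.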
5. Check the NameNode status
hdfs haadmin -getServiceState nn1    # prints "active"
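The standby can be checked the same way (nn2 is the standby in this setup):
hdfs haadmin -getServiceState nn2    # prints "standby"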
6. Failover
hdfs haadmin -failover nn1 nn2    # initiate a failover from nn1 to nn2
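A common way to exercise automatic failover, assuming the ZKFC daemons are running on both NameNode hosts, is to kill the Active NameNode and watch the Standby take over; <namenode-pid> below is a placeholder for the process id shown by jps:
jps                                  # note the NameNode process id on the active host
kill -9 <namenode-pid>
hdfs haadmin -getServiceState nn2    # should report "active" after a short delay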
Configuration file details
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://ns1</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/opt/modules/hadoop-2.2.0/data/tmp</value>
  </property>
  <property>
    <name>fs.trash.interval</name>
    <!-- minutes; Hadoop does not evaluate expressions, so 60*24 must be written as 1440 -->
    <value>1440</value>
  </property>
  <property>
    <name>ha.zookeeper.quorum</name>
    <value>hadoop-yarn1:2181,hadoop-yarn2:2181,hadoop-yarn3:2181</value>
  </property>
  <property>
    <name>hadoop.http.staticuser.user</name>
    <value>yuanhai</value>
  </property>
</configuration>
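To confirm that a value is actually picked up from core-site.xml, hdfs getconf can be used (a quick sanity check, not part of the original walkthrough):
hdfs getconf -confKey fs.defaultFS           # hdfs://ns1
hdfs getconf -confKey ha.zookeeper.quorum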
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.nameservices</name>
    <value>ns1</value>
  </property>
  <property>
    <name>dfs.ha.namenodes.ns1</name>
    <value>nn1,nn2</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn1</name>
    <value>hadoop-yarn1:8020</value>
  </property>
  <property>
    <name>dfs.namenode.rpc-address.ns1.nn2</name>
    <value>hadoop-yarn2:8020</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn1</name>
    <value>hadoop-yarn1:50070</value>
  </property>
  <property>
    <name>dfs.namenode.http-address.ns1.nn2</name>
    <value>hadoop-yarn2:50070</value>
  </property>
  <property>
    <name>dfs.namenode.shared.edits.dir</name>
    <value>qjournal://hadoop-yarn1:8485;hadoop-yarn2:8485;hadoop-yarn3:8485/ns1</value>
  </property>
  <property>
    <name>dfs.journalnode.edits.dir</name>
    <value>/opt/modules/hadoop-2.2.0/data/tmp/journal</value>
  </property>
  <property>
    <name>dfs.ha.automatic-failover.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.client.failover.proxy.provider.ns1</name>
    <value>org.apache.hadoop.hdfs.server.namenode.ha.ConfiguredFailoverProxyProvider</value>
  </property>
  <property>
    <name>dfs.ha.fencing.methods</name>
    <value>sshfence</value>
  </property>
  <property>
    <name>dfs.ha.fencing.ssh.private-key-files</name>
    <value>/home/hadoop/.ssh/id_rsa</value>
  </property>
  <property>
    <name>dfs.permissions.enabled</name>
    <value>false</value>
  </property>
  <!--
  <property>
    <name>dfs.namenode.http-address</name>
    <value>hadoop-yarn.dragon.org:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>hadoop-yarn.dragon.org:50090</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/name</value>
  </property>
  <property>
    <name>dfs.namenode.edits.dir</name>
    <value>${dfs.namenode.name.dir}</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/data</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.dir</name>
    <value>file://${hadoop.tmp.dir}/dfs/namesecondary</value>
  </property>
  <property>
    <name>dfs.namenode.checkpoint.edits.dir</name>
    <value>${dfs.namenode.checkpoint.dir}</value>
  </property>
  -->
</configuration>
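Note that the sshfence method configured above requires passwordless SSH between the two NameNode hosts using the key in dfs.ha.fencing.ssh.private-key-files. A minimal sketch of setting this up as the hadoop user (run on each NameNode host):
ssh-keygen -t rsa -f /home/hadoop/.ssh/id_rsa    # press Enter for an empty passphrase
ssh-copy-id hadoop-yarn1
ssh-copy-id hadoop-yarn2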
slaves
hadoop-yarn1
hadoop-yarn2
hadoop-yarn3
yarn-site.xml
<configuration>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>hadoop-yarn1</value>
  </property>
  <property>
    <name>yarn.log-aggregation-enable</name>
    <value>true</value>
  </property>
  <property>
    <name>yarn.log-aggregation.retain-seconds</name>
    <value>604800</value>
  </property>
</configuration>
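With log aggregation enabled (logs kept for 604800 seconds, i.e. 7 days), the logs of a finished application can be pulled from the command line; the application ID below is a hypothetical placeholder:
yarn logs -applicationId application_1400000000000_0001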
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>hadoop-yarn1:10020</value>
    <description>MapReduce JobHistory Server IPC host:port</description>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>hadoop-yarn1:19888</value>
    <description>MapReduce JobHistory Server Web UI host:port</description>
  </property>
  <property>
    <name>mapreduce.job.ubertask.enable</name>
    <value>true</value>
  </property>
</configuration>
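The JobHistory addresses above only take effect once the history server daemon is running; in Hadoop 2.x it is started with the bundled script:
mr-jobhistory-daemon.sh start historyserver    # run on hadoop-yarn1, then browse http://hadoop-yarn1:19888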
hadoop-env.sh
export JAVA_HOME=/opt/modules/jdk1.6.0_24
Related articles:
http://blog.csdn.net/zhangzhaokun/article/details/17892857