This article walks through how to configure and start a Hadoop environment, with the full configuration files included for reference.
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave2.hadoop:8020</value>
    <final>true</final>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>file:/home/hadoop/hadoop-root/tmp</value>
  </property>
  <property>
    <name>fs.checkpoint.period</name>
    <value>300</value>
    <description>The number of seconds between two periodic checkpoints.</description>
  </property>
  <property>
    <name>fs.checkpoint.size</name>
    <value>67108864</value>
    <description>The size of the current edit log (in bytes) that triggers a periodic checkpoint even if the fs.checkpoint.period hasn't expired.</description>
  </property>
  <property>
    <name>fs.checkpoint.dir</name>
    <value>${hadoop.tmp.dir}/dfs/namesecondary</value>
    <description>Determines where on the local filesystem the DFS secondary namenode should store the temporary images to merge. If this is a comma-delimited list of directories then the image is replicated in all of the directories for redundancy.</description>
  </property>
</configuration>
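A quick way to confirm a value actually took effect is to extract it from the file. The sketch below pulls `fs.defaultFS` out with `sed`; to keep it self-contained it writes a minimal sample file first, but in practice you would point `CONF` at your real `core-site.xml` (the `$HADOOP_CONF_DIR` location is an assumption, as it varies between installs). On a live cluster, `hdfs getconf -confKey fs.defaultFS` does the same job.

```shell
# Sketch: pull one property value out of core-site.xml with sed.
# A minimal sample is written to a temp file so the snippet is runnable;
# replace CONF with the path to your real core-site.xml.
CONF=$(mktemp)
cat > "$CONF" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://slave2.hadoop:8020</value>
  </property>
</configuration>
EOF

# Find the fs.defaultFS <name> line, step to the next line, strip the tags.
VALUE=$(sed -n '/<name>fs.defaultFS<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' "$CONF")
echo "fs.defaultFS = $VALUE"
rm -f "$CONF"
```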
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/name</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop/hadoop-root/dfs/data</value>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>slave1:50090</value>
  </property>
</configuration>
mapred-site.xml
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapred.system.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapred.local.dir</name>
    <value>/home/hadoop/hadoop-root/mapred/local</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.tasktracker.map.tasks.maximum</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.reduce.tasks.maximum</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.job.maps</name>
    <value>2</value>
  </property>
  <property>
    <name>mapreduce.job.reduces</name>
    <value>1</value>
  </property>
  <property>
    <name>mapreduce.tasktracker.http.threads</name>
    <value>50</value>
  </property>
  <property>
    <name>io.sort.factor</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.child.java.opts</name>
    <value>-Xmx400m</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>200</value>
  </property>
  <property>
    <name>mapreduce.map.sort.spill.percent</name>
    <value>0.8</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress</name>
    <value>true</value>
  </property>
  <property>
    <name>mapreduce.map.output.compress.codec</name>
    <value>org.apache.hadoop.io.compress.DefaultCodec</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>10</value>
  </property>
</configuration>
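One consistency check worth doing on the values above: the in-memory sort buffer (`mapreduce.task.io.sort.mb`, 200 MB here) must fit inside the task heap (`-Xmx400m`), and with a spill threshold of 0.8 the map task starts spilling to disk once the buffer reaches 160 MB. A small shell sketch of that arithmetic, using the numbers from this file:

```shell
SORT_MB=200        # mapreduce.task.io.sort.mb
HEAP_MB=400        # from mapred.child.java.opts -Xmx400m
SPILL_PERCENT=80   # mapreduce.map.sort.spill.percent = 0.8

# Spilling begins once the sort buffer is 80% full.
SPILL_AT_MB=$((SORT_MB * SPILL_PERCENT / 100))
echo "spill starts at ${SPILL_AT_MB} MB"

# The buffer must leave heap room for the task's own objects.
[ "$SORT_MB" -lt "$HEAP_MB" ] && echo "sort buffer fits in task heap"
```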
I. Recovering Hadoop
1. Stop all services.
2. Delete the data and name directories under /home/hadoop/hadoop-root/dfs, then recreate them.
3. Delete everything under /home/hadoop/hadoop-root/tmp.
4. On the NameNode host, run hadoop namenode -format.
5. Start the Hadoop services.
----- Hadoop is now recovered -----
6. Stop the HBase services; if they refuse to stop, kill the processes.
7. (On every node) go into /tmp/hbase-root/zookeeper and delete all files there.
8. Start the HBase services.
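The recovery steps above can be sketched as a script. The service-script names (stop-all.sh, start-all.sh, stop-hbase.sh, start-hbase.sh) follow classic Hadoop/HBase layouts and the paths come from the configuration files in this article, but both are assumptions for your cluster. Because steps 2-4 are destructive, the wrapper below only prints each command (a dry run); swap the echo for "$@" once you have reviewed the steps.

```shell
#!/bin/sh
# Dry-run wrapper: prints each command instead of executing it.
# Replace `echo "+ $*"` with `"$@"` to actually run the steps.
run() { echo "+ $*"; }

HADOOP_ROOT=/home/hadoop/hadoop-root   # directories from the configs above

# 1. Stop all Hadoop services.
run stop-all.sh

# 2. Remove and recreate the NameNode and DataNode directories.
run rm -rf "$HADOOP_ROOT/dfs/data" "$HADOOP_ROOT/dfs/name"
run mkdir -p "$HADOOP_ROOT/dfs/data" "$HADOOP_ROOT/dfs/name"

# 3. Clear the temporary directory.
run rm -rf "$HADOOP_ROOT/tmp"

# 4. Reformat HDFS on the NameNode host.
run hadoop namenode -format

# 5. Restart Hadoop.
run start-all.sh

# 6-8. HBase: stop it (kill the processes if it refuses), clear the
# ZooKeeper state on every node, then start it again.
run stop-hbase.sh
run rm -rf /tmp/hbase-root/zookeeper
run start-hbase.sh
```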