
A Walkthrough of a Hadoop 1.1.2 Pseudo-Distributed Installation

Published: 2021-11-12 13:58:33  Source: Yisu Cloud  Reads: 119  Author: Xiaoxin  Category: Cloud Computing

This article walks through installing Hadoop 1.1.2 in pseudo-distributed mode. It should serve as a useful reference; hopefully you will get something out of it.

1. Pseudo-distributed installation

1.1 Configure the IP address

    (1) Enable the virtual network adapter in VMware or VirtualBox.

    (2) In VMware or VirtualBox, set the network connection mode to host-only.

    (3) In Linux, change the IP: right-click the network icon in the top-right corner and choose Edit Connections....

        **** The IP must be on the same subnet as the Windows virtual adapter, and the gateway must actually exist.

    (4) Restart the network service: service network restart

        **** If this reports an error such as "no suitable adapter", recheck the adapter and IP settings.

    (5) Verify: run ifconfig

1.2 Stop the firewall

    (1) Run: service iptables stop

    (2) Verify: service iptables status

1.3 Disable the firewall at boot

    (1) Run: chkconfig iptables off

    (2) Verify: chkconfig --list | grep iptables

1.4 Change the hostname

    (1) Run hostname cloud4 to change the hostname for the current session.

    (2) Verify: run hostname

    (3) Run vi /etc/sysconfig/network and change the hostname in the file so the change persists.

    (4) Restart the machine so the persistent change takes effect: reboot

1.5 Bind the IP to the hostname

    (1) Run vi /etc/hosts

        and append the line 192.168.80.100 cloud4

    (2) Verify: ping cloud4

    (3) On Windows, add the same hostname-to-IP mapping in

        C:\Windows\System32\drivers\etc\hosts
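The hosts edit in step 1.5 can be scripted. A minimal sketch, using the article's example IP and hostname and defaulting to a scratch file instead of /etc/hosts so it can run without root:

```shell
#!/bin/sh
# Idempotently add the IP-to-hostname mapping from step 1.5.
# HOSTS_FILE would normally be /etc/hosts; a temp file stands in here.
HOSTS_FILE="${HOSTS_FILE:-$(mktemp)}"
ENTRY="192.168.80.100 cloud4"   # example values from the article

# Only append if the hostname is not already mapped, so re-running
# the script never duplicates the line.
grep -q "cloud4" "$HOSTS_FILE" || echo "$ENTRY" >> "$HOSTS_FILE"
grep "cloud4" "$HOSTS_FILE"
```

Running it twice leaves exactly one entry, which a plain `echo >> /etc/hosts` would not guarantee.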

1.6 Passwordless SSH login

    (1) Run ssh-keygen -t rsa (then press Enter through the prompts); the key pair is written to /root/.ssh/

    (2) Run cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys to create the authorization file.

    (3) Verify: ssh localhost (or ssh <hostname>)
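Step 1.6 can also be done non-interactively. A sketch, writing to a scratch directory rather than /root/.ssh so it runs unprivileged:

```shell
#!/bin/sh
# Non-interactive version of step 1.6. SSH_DIR would normally be
# ~/.ssh (the article uses /root/.ssh); a temp dir stands in here.
SSH_DIR="$(mktemp -d)"

# -N "" supplies an empty passphrase, replacing the interactive
# Enter presses; -q suppresses the banner output.
ssh-keygen -t rsa -N "" -f "$SSH_DIR/id_rsa" -q

# The article copies the public key into authorized_keys; appending
# (>>) is the safer habit, since cp would clobber any keys that were
# authorized earlier.
cat "$SSH_DIR/id_rsa.pub" >> "$SSH_DIR/authorized_keys"
chmod 600 "$SSH_DIR/authorized_keys"
ls "$SSH_DIR"
```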

1.7 Install the JDK

    (1) Use WinSCP to copy the JDK and Hadoop archives to /root/Downloads on the Linux machine.

    (2) cp /root/Downloads/* /usr/local

    (3) cd /usr/local

        Grant execute permission: chmod u+x jdk-6u24-linux-i586.bin

    (4) ./jdk-6u24-linux-i586.bin

    (5) Rename: mv jdk1.6.0_24 jdk

    (6) Run vi /etc/profile to set the environment variables.

        Add two lines:    export JAVA_HOME=/usr/local/jdk

                          export PATH=.:$JAVA_HOME/bin:$PATH

        Save and exit, then run: source /etc/profile

    (7) Verify: run java -version
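The profile edit in step (6) amounts to appending two lines and re-reading the file. A sketch, using a scratch file in place of /etc/profile so it runs without root (the /usr/local/jdk path follows the article):

```shell
#!/bin/sh
# Step 1.7(6) as a script. PROFILE would normally be /etc/profile.
PROFILE="$(mktemp)"

# Quoted heredoc delimiter ('EOF') keeps $JAVA_HOME literal in the
# file so it expands when the profile is sourced, not when written.
cat >> "$PROFILE" <<'EOF'
export JAVA_HOME=/usr/local/jdk
export PATH=.:$JAVA_HOME/bin:$PATH
EOF

# "." (i.e. source) re-reads the file so the variables take effect
# in the current shell without logging out.
. "$PROFILE"
echo "JAVA_HOME=$JAVA_HOME"
```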

1.8 Install Hadoop

    (1) Unpack the archive: tar -zxvf hadoop-1.1.2.tar.gz

    (2) Rename: mv hadoop-1.1.2 hadoop

    (3) Run vi /etc/profile to set the environment variables.

        Add one line:     export HADOOP_HOME=/usr/local/hadoop

        Change one line:  export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

        Save and exit, then run: source /etc/profile

    (4) Verify: run hadoop

    (5) Edit the configuration files in conf/: hadoop-env.sh, core-site.xml, hdfs-site.xml, and mapred-site.xml

        <1> File hadoop-env.sh, line 9:

        export JAVA_HOME=/usr/local/jdk/

        <2> File core-site.xml:

        <configuration>
            <property>
                <name>fs.default.name</name>
                <value>hdfs://cloud4:9000</value>
                <description>change your own hostname</description>
            </property>
            <property>
                <name>hadoop.tmp.dir</name>
                <value>/usr/local/hadoop/tmp</value>
            </property>
        </configuration>

        <3> File hdfs-site.xml:

        <configuration>
            <property>
                <!-- replication factor; the default is 3 -->
                <name>dfs.replication</name>
                <value>1</value>
            </property>
            <property>
                <!-- whether permission checking is enabled -->
                <name>dfs.permissions</name>
                <value>false</value>
            </property>
        </configuration>

        For the super-user (the identity the NameNode process runs as), the system performs no permission checks at all.

        <4> File mapred-site.xml:

        <configuration>
            <property>
                <name>mapred.job.tracker</name>
                <value>cloud4:9001</value>
                <description>change your own hostname</description>
            </property>
        </configuration>
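Since each config file says "change your own hostname", generating them from a variable avoids editing the XML by hand. A sketch for core-site.xml, writing into a scratch directory (CONF_DIR would normally be $HADOOP_HOME/conf; the hostname defaults to the article's cloud4):

```shell
#!/bin/sh
# Generate core-site.xml from step (5) with the hostname substituted.
CONF_DIR="$(mktemp -d)"     # normally $HADOOP_HOME/conf
HOST="${HOST:-cloud4}"      # your own hostname

# Unquoted EOF lets $HOST expand inside the heredoc.
cat > "$CONF_DIR/core-site.xml" <<EOF
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://$HOST:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
  </property>
</configuration>
EOF
grep "fs.default.name" "$CONF_DIR/core-site.xml"
```

The same pattern covers hdfs-site.xml and mapred-site.xml (using hdfs://$HOST:9001 for mapred.job.tracker).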

    (6) Format the NameNode: hadoop namenode -format

    (7) Start Hadoop: start-all.sh

    (8) Verify:

        <1> Run jps to list the Java processes; you should see five: NameNode, SecondaryNameNode, DataNode, JobTracker, and TaskTracker.

        <2> Check in a browser: http://cloud4:50070 and http://cloud4:50030

            ***** This requires the hostname mapping in C:\Windows\System32\drivers\etc\hosts on Windows (see step 1.5).

1.9 To silence the warning:

[root@cloud4 ~]# hadoop fs -ls /

Warning: $HADOOP_HOME is deprecated.

Do the following:

[root@cloud4 ~]# vi /etc/profile   (add one line)

# /etc/profile

export HADOOP_HOME_WARN_SUPPRESS=1

export JAVA_HOME=/usr/local/jdk

export HADOOP_HOME=/usr/local/hadoop

export PATH=.:$HADOOP_HOME/bin:$JAVA_HOME/bin:$PATH

[root@cloud4 ~]# source /etc/profile   (takes effect immediately)

