1. Environment
master 192.168.0.223 mesos-master
slave  192.168.0.225 mesos-slave
2. Environment preparation
Stop the firewall.
Disable SELinux.
Set the hostnames of the two machines to master and slave respectively.
Add entries to /etc/hosts so the two hosts can resolve each other by name (a sketch of these steps follows below).
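The preparation steps above as a sketch, assuming CentOS 7 with systemd (the later steps use yum; adjust for your distribution):
systemctl stop firewalld && systemctl disable firewalld                 # stop the firewall
setenforce 0                                                            # disable SELinux for the running system
sed -i 's/^SELINUX=enforcing/SELINUX=disabled/' /etc/selinux/config     # keep it disabled after reboot
hostnamectl set-hostname master      # run "hostnamectl set-hostname slave" on the slave
cat >> /etc/hosts <<'EOF'
192.168.0.223 master
192.168.0.225 slave
EOF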
3. Configure passwordless SSH between master and slave
Set up mutual SSH trust for the hadoop user, because Hadoop is started as the hadoop user.
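If the hadoop user does not exist yet (an assumption; it is not created anywhere in this document), create it on both machines first and switch to it before generating the keys:
useradd hadoop
passwd hadoop        # a password is needed so ssh-copy-id can authenticate once
su - hadoop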
On master:
yum -y install sshpass
ssh-keygen    # accept the defaults (press Enter at every prompt)
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.0.225
On slave:
yum -y install sshpass
ssh-keygen    # accept the defaults (press Enter at every prompt)
ssh-copy-id -i ~/.ssh/id_rsa.pub hadoop@192.168.0.223
Test ssh to the other host; if you are not prompted for a password, the trust is working.
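For example, from master as the hadoop user (and the reverse direction from slave), the remote hostname should print without a password prompt:
ssh hadoop@192.168.0.225 hostname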
4. Install the JDK
tar zxvf jdk-8u65-linux-x64.tar.gz
mv jdk1.8.0_65 /usr/jdk
4.1 Set environment variables (append the following to /etc/profile):
export JAVA_HOME=/usr/jdk
export JRE_HOME=/usr/jdk/jre
export CLASSPATH=.:$CLASSPATH:$JAVA_HOME/lib:$JRE_HOME/lib
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
Run: source /etc/profile
4.2 Test the JDK
java -version    # the version information should be printed
5. Install the Mesos master and slave (covered in other posts).
After installation, a libmesos.so file is generated under /usr/local/lib.
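A quick sanity check of the Mesos installation, assuming the master web UI runs on the default port 5050 (the same address used in the test step at the end):
ls -l /usr/local/lib/libmesos.so                      # the native library the JobTracker will load
curl -sI http://192.168.0.223:5050 | head -n 1        # the master web UI should answer with HTTP 200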
6. Install and configure Hadoop
On both master and slave:
tar zxvf hadoop-2.5.0-cdh6.4.8.tar.gz
mv hadoop-2.5.0-cdh6.4.8 /usr/hadoop
cd /usr/hadoop
mkdir -p tmp
cd /usr/hadoop/
mv bin bin-mapreduce2
ln -s bin-mapreduce1 bin
mv examples examples-mapreduce2
ln -s examples-mapreduce1 examples
cd etc/
mv hadoop hadoop-mapreduce2
ln -s hadoop-mapreduce1 hadoop
7. Add Hadoop environment variables
vim /etc/profile
export HADOOP_HOME=/usr/hadoop
export PATH=$PATH:$HADOOP_HOME:$HADOOP_HOME/bin
source /etc/profile
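To confirm the variables took effect, the hadoop command should now resolve from /usr/hadoop/bin and print its version:
which hadoop         # expected: /usr/hadoop/bin/hadoop
hadoop version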
8. Build the hadoop-on-mesos jar
yum -y install maven java-1.7.0-openjdk-devel git    # openjdk-7-jdk is the Debian/Ubuntu package name; on a yum-based system install the OpenJDK devel package instead
git clone https://github.com/mesos/hadoop.git        # the hadoop-on-mesos project
cd hadoop
mvn package    # builds the jar; it ends up under target/
9. Copy the built jar into the Hadoop installation directory
On both master and slave:
cp hadoop/target/hadoop-mesos-0.1.0.jar /usr/hadoop/share/hadoop/common/lib/
10. Configure hadoop on mesos
On both master and slave:
vim /usr/hadoop/etc/hadoop/mapred-site.xml
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9001</value>
</property>
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.MesosScheduler</value>
</property>
<property>
  <name>mapred.mesos.taskScheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
</property>
<property>
  <name>mapred.mesos.master</name>
  <value>zk://192.168.0.223</value>
</property>
<property>
  <name>mapred.mesos.executor.uri</name>
  <value>hdfs://localhost:9000/hadoop-2.5.0-cdh6.2.0.tar.gz</value>
</property>
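mapred.mesos.executor.uri points at a Hadoop tarball stored in HDFS; the Mesos executors download and unpack it to run TaskTrackers. Once HDFS is running (see Part 2 below), the configured /usr/hadoop directory has to be repackaged and uploaded to that path. A sketch, assuming the tarball name matches the URI above and that the layout inside the archive is what your executor configuration expects:
cd /usr
tar czf hadoop-2.5.0-cdh6.2.0.tar.gz hadoop
hadoop fs -put hadoop-2.5.0-cdh6.2.0.tar.gz /hadoop-2.5.0-cdh6.2.0.tar.gz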
11. Give ownership to the hadoop user
On both master and slave:
chown -R hadoop:hadoop /usr/hadoop
12. On master, start the JobTracker and connect it to Mesos
su hadoop
MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so hadoop jobtracker
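The command above keeps the JobTracker in the foreground. To leave it running after logging out, one option (not part of the original steps) is to start it with nohup and a log file:
nohup env MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so hadoop jobtracker > /tmp/jobtracker.log 2>&1 &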
13. Test
Open http://192.168.0.223:5050 in a browser and check whether a Hadoop framework appears in the Frameworks list.
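Submitting a job is a stronger test than the UI alone. A sketch, run as the hadoop user; the path to the MR1 examples jar is an assumption and may differ in your CDH tarball (look for *examples*.jar under /usr/hadoop/share/hadoop/):
hadoop jar /usr/hadoop/share/hadoop/mapreduce1/hadoop-examples-*.jar pi 2 10
While it runs, the Mesos UI at http://192.168.0.223:5050 should show Hadoop tasks being launched.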
----------------------------------------------------------------------------------------
Part 2: Setting up hadoop on mesos after HDFS has been set up
1. Configure HDFS by following the Hadoop setup document; core-site.xml, hdfs-site.xml, and mapred-site.xml all need to be configured (a minimal sketch of the first two follows after the commands below).
mv bin bin-mapreduce2
ln -s bin-mapreduce1 bin
# There is no need to move the hadoop config directory; it is modified in place when setting up hadoop on mesos
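A minimal sketch of core-site.xml and hdfs-site.xml for a small test cluster, written here with heredocs; the values are assumptions, chosen to match the hdfs://localhost:9000 address used in mapred.mesos.executor.uri and the tmp directory created earlier, so adjust them to your own HDFS layout:
cat > /usr/hadoop/etc/hadoop/core-site.xml <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://localhost:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/hadoop/tmp</value>
  </property>
</configuration>
EOF
cat > /usr/hadoop/etc/hadoop/hdfs-site.xml <<'EOF'
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
EOF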
2. Copy the hdfs command and start-dfs.sh into the MR1 bin directory
cd /usr/hadoop/bin-mapreduce2
cp hdfs /usr/hadoop/bin-mapreduce1
cd /usr/hadoop/sbin
cp start-dfs.sh /usr/hadoop/bin-mapreduce1
3. Set up hadoop on mesos
Edit the mapred-site.xml configuration file:
<property>
  <name>mapred.job.tracker</name>
  <value>localhost:9002</value>   <!-- changed to 9002 to avoid a conflict with the port used in hdfs-site.xml -->
</property>
<property>
  <name>mapred.jobtracker.taskScheduler</name>
  <value>org.apache.hadoop.mapred.MesosScheduler</value>
</property>
<property>
  <name>mapred.mesos.taskScheduler</name>
  <value>org.apache.hadoop.mapred.JobQueueTaskScheduler</value>
</property>
<property>
  <name>mapred.mesos.master</name>
  <value>zk://192.168.0.223</value>
</property>
<property>
  <name>mapred.mesos.executor.uri</name>
  <value>hdfs://localhost:9000/hadoop-2.5.0-cdh6.2.0.tar.gz</value>
</property>
4. Format and start HDFS
hdfs namenode -format
start-dfs.sh
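To check that HDFS actually came up (run as the hadoop user):
jps                          # should list NameNode (and DataNode on the slave)
hdfs dfsadmin -report        # should report live datanodes
hadoop fs -ls /              # lists the root of the new filesystem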
5. Start hadoop on mesos
su hadoop
MESOS_NATIVE_LIBRARY=/usr/local/lib/libmesos.so hadoop jobtracker