I. Overview
1. The environment is based on the Hadoop HA cluster built previously.
2. The ZooKeeper environment required by Spark HA was already configured in the earlier articles and is not repeated here.
3. Required packages: scala-2.12.3.tgz and spark-2.2.0-bin-hadoop2.7.tgz
4. Host planning
Host        | Role
bd1 bd2 bd3 | Worker
bd4 bd5     | Master, Worker
II. Configure Scala
1. Extract and copy
[root@bd1 ~]# tar -zxf scala-2.12.3.tgz
[root@bd1 ~]# cp -r scala-2.12.3 /usr/local/scala
2. Configure environment variables
[root@bd1 ~]# vim /etc/profile
export SCALA_HOME=/usr/local/scala
export PATH=$SCALA_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile
3. Verify
[root@bd1 ~]# scala -version
Scala code runner version 2.12.3 -- Copyright 2002-2017, LAMP/EPFL and Lightbend, Inc.
III. Configure Spark
1. Extract and copy
[root@bd1 ~]# tar -zxf spark-2.2.0-bin-hadoop2.7.tgz
[root@bd1 ~]# cp -r spark-2.2.0-bin-hadoop2.7 /usr/local/spark
2. Configure environment variables
[root@bd1 ~]# vim /etc/profile
export SPARK_HOME=/usr/local/spark
export PATH=$SPARK_HOME/bin:$PATH
[root@bd1 ~]# source /etc/profile
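To confirm that the new variables take effect, an optional quick check (assuming the SPARK_HOME value set above):
[root@bd1 ~]# echo $SPARK_HOME
/usr/local/spark
[root@bd1 ~]# spark-submit --version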
3. Edit spark-env.sh # the file does not exist by default; create it from the template (see the copy commands after this block)
[root@bd1 conf]# vim spark-env.sh
export JAVA_HOME=/usr/local/jdk
export HADOOP_HOME=/usr/local/hadoop
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
export SCALA_HOME=/usr/local/scala
export SPARK_DAEMON_JAVA_OPTS="-Dspark.deploy.recoveryMode=ZOOKEEPER -Dspark.deploy.zookeeper.url=bd4:2181,bd5:2181 -Dspark.deploy.zookeeper.dir=/spark"
export SPARK_WORKER_MEMORY=1g
export SPARK_WORKER_CORES=2
export SPARK_WORKER_INSTANCES=1
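Regarding the comment in step 3: the Spark distribution ships a spark-env.sh.template under conf/, so the file is typically created by copying that template before editing (a sketch, using the install path /usr/local/spark from above):
[root@bd1 ~]# cd /usr/local/spark/conf
[root@bd1 conf]# cp spark-env.sh.template spark-env.sh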
4. Edit spark-defaults.conf # the file does not exist by default; create it from the template (see the note after this block)
[root@bd1 conf]# vim spark-defaults.conf
spark.master spark://master:7077
spark.eventLog.enabled true
spark.eventLog.dir hdfs://master:/user/spark/history
spark.serializer org.apache.spark.serializer.KryoSerializer
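spark-defaults.conf can likewise be created from its template. Note also that with two Masters (bd4 and bd5, per the host planning above), clients of a standalone HA cluster usually list both Masters in the URL rather than a single host; a hedged alternative for the spark.master line, assuming port 7077 on both nodes, would be:
[root@bd1 conf]# cp spark-defaults.conf.template spark-defaults.conf
# alternative spark.master value listing both HA Masters:
spark.master                     spark://bd4:7077,bd5:7077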
5. Create the log directory in HDFS
hdfs dfs -mkdir -p /user/spark/history
hdfs dfs -chmod 777 /user/spark/history
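An optional check that the directory exists with the expected permissions:
hdfs dfs -ls /user/spark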
6. Edit slaves (see the template note after the list)
[root@bd1 conf]# vim slaves
bd1
bd2
bd3
bd4
bd5
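Like the other files in conf/, slaves does not exist in a fresh distribution; it can be created from the bundled template before adding the host names above (same conf directory as before):
[root@bd1 conf]# cp slaves.template slaves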
IV. Sync to the Other Hosts
1. Use scp to sync Scala to bd2-bd5
scp -r /usr/local/scala root@bd2:/usr/local/
scp -r /usr/local/scala root@bd3:/usr/local/
scp -r /usr/local/scala root@bd4:/usr/local/
scp -r /usr/local/scala root@bd5:/usr/local/
2. Sync Spark to bd2-bd5
scp -r /usr/local/spark root@bd2:/usr/local/
scp -r /usr/local/spark root@bd3:/usr/local/
scp -r /usr/local/spark root@bd4:/usr/local/
scp -r /usr/local/spark root@bd5:/usr/local/
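The /etc/profile changes from sections II and III were made only on bd1, so they also need to reach the other nodes; a sketch, assuming it is safe to overwrite /etc/profile on bd2-bd5 (otherwise append just the export lines):
scp /etc/profile root@bd2:/etc/profile
scp /etc/profile root@bd3:/etc/profile
scp /etc/profile root@bd4:/etc/profile
scp /etc/profile root@bd5:/etc/profile
# then run `source /etc/profile` (or simply re-login) on each node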
V. Start the Cluster and Test HA
1. Startup order: ZooKeeper --> Hadoop --> Spark (a sketch of the commands follows)
2. Start Spark
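A sketch of that order using the usual control scripts (script names assume default ZooKeeper and Hadoop installs from the earlier articles; adjust paths to your environment):
# on every ZooKeeper node:
zkServer.sh start
# on the Hadoop control node:
start-dfs.sh
start-yarn.sh
# on the primary Spark Master (bd4), as shown in the next step:
/usr/local/spark/sbin/start-all.sh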
bd4:
[root@bd4 sbin]# cd /usr/local/spark/sbin/
[root@bd4 sbin]# ./start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd4.out
bd4: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd4.out
bd2: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd2.out
bd3: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd3.out
bd5: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd5.out
bd1: starting org.apache.spark.deploy.worker.Worker, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.worker.Worker-1-bd1.out
[root@bd4 sbin]# jps
3153 DataNode
7235 Jps
3046 JournalNode
7017 Master
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
bd5:
[root@bd5 sbin]# ./start-master.sh
starting org.apache.spark.deploy.master.Master, logging to /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out
[root@bd5 sbin]# jps
3584 NodeManager
5602 RunJar
3251 QuorumPeerMain
8564 Master
3447 DataNode
8649 Jps
8474 Worker
3340 JournalNode
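At this point both Masters are running. Which one is active can be checked on the standalone Master web UI (port 8080 by default): one node should report Status: ALIVE and the other Status: STANDBY. A rough command-line check:
curl -s http://bd4:8080 | grep -i status
curl -s http://bd5:8080 | grep -i status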
3. Kill the Master process on bd4
[root@bd4 sbin]# kill -9 7017
[root@bd4 sbin]# jps
3153 DataNode
7282 Jps
3046 JournalNode
3290 NodeManager
7116 Worker
2958 QuorumPeerMain
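After the kill, ZooKeeper should elect bd5 as the new active Master within a short time (failover is not instantaneous). This can be confirmed by checking bd5's web UI status again, or by tailing the Master log path printed by start-master.sh above:
curl -s http://bd5:8080 | grep -i status
tail -n 20 /usr/local/spark/logs/spark-root-org.apache.spark.deploy.master.Master-1-bd5.out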
VI. Summary
At first I planned to put the Masters on bd1 and bd2, but after starting Spark both of those nodes stayed in Standby. Only after moving the Master role to bd4 and bd5 in the configuration did the cluster run properly. In other words, in this setup the Spark HA Masters had to sit on nodes of the ZooKeeper cluster, i.e. nodes running the ZooKeeper process (QuorumPeerMain in the jps output above).