This article walks through a fully distributed installation of Spark 1.6.1 and Hadoop 2.6.4. The steps are described in detail and should serve as a practical reference for anyone setting up a similar cluster.
Prerequisites (all packages can be downloaded from the official websites):
hadoop-2.6.4.tar.gz jdk-7u71-linux-x64.tar.gz scala-2.10.4.tgz spark-1.6.1-bin-hadoop2.6.tgz
My hardware environment:
master: 8 virtual cores, 16.0 GB RAM
slave1: 4 virtual cores, 10.0 GB RAM
slave2: 4 virtual cores, 10.0 GB RAM
slave3: 4 virtual cores, 10.0 GB RAM
slave4: 4 virtual cores, 10.0 GB RAM
Name the five machines master, slave1, slave2, slave3 and slave4. On the master machine:
sudo vim /etc/hostname
master
Then give all five machines the same hosts file:
sudo vim /etc/hosts
127.0.0.1    localhost
127.0.1.1    master/slave1/...
192.168.80.70    master
192.168.80.71    slave1
192.168.80.72    slave2
192.168.80.73    slave3
192.168.80.74    slave4
After configuring, reboot; you should then be able to ping slave1 from master.
Configure SSH:
On every node, run ssh-keygen -t rsa and simply press Enter through all prompts.
① On master, append the public key to authorized_keys: cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
② Copy master's authorized_keys to the ~/.ssh directory of every other machine: scp authorized_keys root@slave1:~/.ssh
③ Fix the permissions: chmod 644 authorized_keys, then verify with ssh localhost and ssh master.
④ Test the setup: ssh slave1, enter the password once and exit, then ssh slave1 again; if it logs in without a password, passwordless SSH is working.
Disable the firewall on all nodes: ufw disable
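For reference, here is a minimal sketch of the whole key-distribution sequence as run from master (it assumes the root account and the slave1–slave4 hostnames used in this article):

# generate a key pair (run ssh-keygen on every node)
ssh-keygen -t rsa
# authorize master's own key and fix permissions
cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys
chmod 644 ~/.ssh/authorized_keys
# push authorized_keys to every slave
for host in slave1 slave2 slave3 slave4; do
  scp ~/.ssh/authorized_keys root@$host:~/.ssh/
done
# verify passwordless login
ssh slave1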
Edit the configuration files:
vim /etc/profile
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_71
export PATH=$JAVA_HOME/bin:$JAVA_HOME/jre/bin:$PATH
export CLASSPATH=$CLASSPATH:.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib
export SCALA_HOME=/opt/scala/scala-2.10.4
export PATH=/opt/scala/scala-2.10.4/bin:$PATH
export PATH=$PATH:$JAVA_HOME/bin
export HADOOP_HOME=/root/hadoop-2.6.4
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_HOME/etc/hadoop
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin:$HADOOP_HOME/lib
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_OPTS="-Djava.library.path=$HADOOP_HOME/lib"
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export SPARK_HOME=/root/spark-1.6.1-bin-hadoop2.6
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source /etc/profile
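After sourcing /etc/profile, a quick sanity check confirms the tools are on the PATH (a minimal check, assuming the install paths above):

java -version      # expect 1.7.0_71
scala -version     # expect 2.10.4
hadoop version     # expect 2.6.4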
vim hadoop-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_71
export HADOOP_CONF_DIR=/root/hadoop-2.6.4/etc/hadoop/

source hadoop-env.sh
vim yarn-env.sh
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_71

source yarn-env.sh
vim spark-env.sh
export SPARK_MASTER_IP=master
export SPARK_MASTER_PORT=7077
export SPARK_WORKER_CORES=4
export SPARK_WORKER_MEMORY=4g
export SPARK_WORKER_INSTANCES=2
export JAVA_HOME=/usr/lib/jvm/jdk1.7.0_71
export SCALA_HOME=/opt/scala/scala-2.10.4
export HADOOP_HOME=/root/hadoop-2.6.4

source spark-env.sh
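Note that a fresh Spark extract normally ships only conf/spark-env.sh.template; if spark-env.sh does not exist yet, copy the template before editing (assuming the extract location used above):

cd /root/spark-1.6.1-bin-hadoop2.6
cp conf/spark-env.sh.template conf/spark-env.sh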
Both Spark and Hadoop need their slaves file edited:
vim slaves
slave1
slave2
slave3
slave4
Hadoop configuration:
vim core-site.xml

<configuration>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/root/hadoop-2.6.4/tmp</value>
  </property>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://master:9000</value>
  </property>
</configuration>
vim hdfs-site.xml

<configuration>
  <property>
    <name>dfs.http.address</name>
    <value>master:50070</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>master:50090</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
</configuration>
vim mapred-site.xml

<configuration>
  <property>
    <name>mapred.job.tracker</name>
    <value>master:9001</value>
  </property>
  <property>
    <name>mapred.map.tasks</name>
    <value>20</value>
  </property>
  <property>
    <name>mapred.reduce.tasks</name>
    <value>4</value>
  </property>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>master:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>master:19888</value>
  </property>
</configuration>
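A fresh Hadoop 2.6.4 extract normally ships only mapred-site.xml.template in etc/hadoop; if mapred-site.xml is missing, create it from the template first:

cp etc/hadoop/mapred-site.xml.template etc/hadoop/mapred-site.xml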
vim yarn-site.xml

<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>master:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>master:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>master:8088</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>master:8031</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>master:8033</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
</configuration>
With the above configured, distribute the two extracted directories from the master node to slave1 through slave4:
scp -r spark-1.6.1-bin-hadoop2.6 root@slave1:~/
scp -r hadoop-2.6.4 root@slave1:~/
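The same two commands have to be repeated for slave2, slave3 and slave4; a small loop over the hostnames (assuming the root home directory layout used here) avoids the repetition:

for host in slave1 slave2 slave3 slave4; do
  scp -r spark-1.6.1-bin-hadoop2.6 root@$host:~/
  scp -r hadoop-2.6.4 root@$host:~/
done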
Make sure SSH is configured beforehand. Starting and testing Hadoop itself is not covered in detail here; use the jps command to check which daemons are running.
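For completeness, a minimal first start of Hadoop usually looks like this (standard Hadoop 2.x commands, run on master from the hadoop-2.6.4 directory):

# format HDFS once, before the very first start
./bin/hdfs namenode -format
# start HDFS and YARN
./sbin/start-dfs.sh
./sbin/start-yarn.sh
# jps on master should list NameNode, SecondaryNameNode and ResourceManager;
# jps on the slaves should list DataNode and NodeManager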
Start and test Spark:
./sbin/start-all.sh
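To confirm the standalone cluster came up (a quick check, not part of the original write-up), run jps again; the Spark master web UI is served at http://master:8080 by default:

jps
# on master:  Master
# on slaves:  Worker (two per slave with SPARK_WORKER_INSTANCES=2)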
Run one of the examples bundled with Spark:
./bin/spark-submit --master spark://master:7077 \
  --class org.apache.spark.examples.SparkPi \
  /root/spark-1.6.1-bin-hadoop2.6/lib/spark-examples-1.6.1-hadoop2.6.0.jar
Test the Spark shell:
./bin/spark-shell --master spark://master:7077
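Once connected, a trivial job is enough to confirm that work is actually being scheduled on the cluster. The one-liner below is only a smoke test; it feeds a single Scala expression to the shell on stdin:

echo 'sc.parallelize(1 to 1000).count()' | ./bin/spark-shell --master spark://master:7077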
That concludes this walkthrough of installing Spark 1.6.1 and Hadoop 2.6.4 in fully distributed mode. Thanks for reading, and I hope it proves useful.