1 Basic server information
Download the package jdk-8u271-linux-x64.tar.gz, extract it, and perform the following steps on every node:
tar -zxvf jdk-8u271-linux-x64.tar.gz -C /usr/local/src/

vi /etc/profile

# JDK_HOME author:BIGDATA_N1
export JAVA_HOME=/usr/local/src/jdk1.8.0_271
export PATH=$PATH:$JAVA_HOME/bin:$JAVA_HOME/jre/bin
export CLASSPATH=.:$JAVA_HOME/lib:$JAVA_HOME/jre/lib

source /etc/profile

2 Master node configuration
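Appending to PATH in /etc/profile as above accumulates duplicate entries each time the file is re-sourced. A minimal POSIX-shell sketch of an idempotent append (the `add_to_path` helper name and the `/opt/jdk/bin` path are illustrative, not part of the original setup):

```shell
# Illustrative helper: append a directory to PATH only if it is not already there.
add_to_path() {
  case ":$PATH:" in
    *":$1:"*) ;;               # already present: do nothing
    *) PATH="$PATH:$1" ;;      # otherwise append
  esac
}

PATH="/usr/bin"                # hypothetical starting PATH
add_to_path /opt/jdk/bin
add_to_path /opt/jdk/bin       # second call is a no-op
echo "$PATH"                   # /usr/bin:/opt/jdk/bin
```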
tar -zxvf spark-2.4.7-bin-hadoop2.7.tgz

# Configure environment variables
vi /etc/profile

# SPARK_HOME author:BIGDATA_N1
export SPARK_HOME=/home/mppadmin/spark-2.4.7-bin-hadoop2.7
export PATH=$PATH:$SPARK_HOME/bin:$SPARK_HOME/sbin

source /etc/profile

# Configure the worker list used at cluster start-up
[root@qfs010 conf]# cp slaves.template slaves
[root@qfs010 conf]# vi slaves
qfs011
qfs012
qfs013
qfs014
qfs015
qfs016

[root@qfs010 sbin]# vi spark-config.sh
export JAVA_HOME=/usr/local/src/jdk1.8.0_271

3 Slave node configuration (hosts qfs011-qfs016)
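The slaves file is a plain list of worker hostnames, one per line. Since the hosts follow a numeric pattern, the same file could also be generated with a loop instead of being typed by hand (a sketch; the original edits the file manually):

```shell
# Generate the worker list qfs011..qfs016 into a local file named "slaves".
for i in 11 12 13 14 15 16; do
  echo "qfs0$i"
done > slaves

cat slaves
```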
First, configure the JDK on each slave node as described in section 1.
Next, copy the spark-2.4.7-bin-hadoop2.7 package on qfs011 to the other slave nodes; no further configuration is needed on them.
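The copy step above can be scripted. A hedged sketch using scp in a loop (hostnames and paths are taken from the text; the leading `echo` only prints each command, so remove it to actually transfer the files on a real cluster):

```shell
# Print the copy command for each remaining slave node.
# Drop the leading "echo" to perform the actual transfer.
for host in qfs012 qfs013 qfs014 qfs015 qfs016; do
  echo scp -r /home/mppadmin/spark-2.4.7-bin-hadoop2.7 "$host:/home/mppadmin/"
done
```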
4 Start the cluster

[mppadmin@qfs010 spark-2.4.7-bin-hadoop2.7]$ start-all.sh
starting org.apache.spark.deploy.master.Master, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.master.Master-1-qfs010.out
qfs011: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs011.out
qfs014: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs014.out
qfs015: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs015.out
qfs012: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs012.out
qfs013: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs013.out
qfs016: starting org.apache.spark.deploy.worker.Worker, logging to /home/mppadmin/spark-2.4.7-bin-hadoop2.7/logs/spark-mppadmin-org.apache.spark.deploy.worker.Worker-1-qfs016.out

5 Test the environment with spark-submit
[mppadmin@qfs010 jars]$ spark-submit --master spark://qfs010:7077 --executor-memory 20G --executor-cores 6 /home/mppadmin/spark-2.4.7-bin-hadoop2.7/examples/src/main/python/pi.py
21/02/04 14:49:26 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
21/02/04 14:49:29 WARN TaskSetManager: Stage 0 contains a task of very large size (371 KB). The maximum recommended task size is 100 KB.
Pi is roughly 3.140360
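The bundled pi.py example estimates π by Monte Carlo sampling: draw random points in the unit square and count how many land inside the quarter circle; that fraction approaches π/4. The same idea can be sketched locally in awk without Spark (the sample count and seed here are illustrative, not taken from pi.py):

```shell
# Estimate pi by sampling n random points in the unit square and
# counting those inside the quarter circle x^2 + y^2 < 1.
awk 'BEGIN {
  srand(7)                     # illustrative fixed seed
  n = 100000                   # illustrative sample count
  c = 0
  for (i = 0; i < n; i++) {
    x = rand(); y = rand()
    if (x*x + y*y < 1) c++
  }
  printf "Pi is roughly %f\n", 4 * c / n
}'
```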