- 一、SparkStreaming 概述
- 二、DStream 入门
- 1、WordCount 案例实操
- 三、DStream 创建
- 1、RDD 队列
- 2、自定义数据源
- 3、Kafka 数据源
- 四、DStream 转换
- 1、无状态转化操作
- 1.1、Transform
- 1.2、 join
- 2、有状态转化操作
- 2.1、 UpdateStateByKey
- 2.2、WindowOperations
- 五、DStream 输出
- 六、优雅关闭
- 七、SparkStreaming 案例实操
- 1、需求一:广告黑名单
- 2、需求二:广告点击量实时统计
- 3、需求三:最近一小时广告点击量
Spark Streaming 用于流式数据的处理。Spark Streaming 支持的数据输入源很多,例如:Kafka、Flume、Twitter、ZeroMQ 和简单的 TCP 套接字等等。数据输入后可以用 Spark 的高度抽象原语如:map、reduce、join、window 等进行运算,而结果也能保存在很多地方,如 HDFS、数据库等。
Spark Streaming 的特点:
- 易用
- 容错
- 易整合到 Spark 体系
Spark Streaming 架构
二、DStream 入门
1、WordCount 案例实操
整体架构图
SparkStreaming 架构图
添加依赖
```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
```
编写代码
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.{DStream, ReceiverInputDStream}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming01_WordCount {

    def main(args: Array[String]): Unit = {

        // TODO 创建环境对象
        // StreamingContext创建时,需要传递两个参数
        // 第一个参数表示环境配置
        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        // 第二个参数表示批量处理的周期(采集周期)
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        // TODO 逻辑处理
        // 获取端口数据(测试时可在另一个终端执行 nc -lk 9999 向该端口发送数据)
        val lines: ReceiverInputDStream[String] = ssc.socketTextStream("localhost", 9999)

        val words = lines.flatMap(_.split(" "))

        val wordToOne = words.map((_,1))

        val wordToCount: DStream[(String, Int)] = wordToOne.reduceByKey(_+_)

        wordToCount.print()

        // 由于SparkStreaming采集器是长期执行的任务,所以不能直接关闭
        // 如果main方法执行完毕,应用程序也会自动结束。所以不能让main执行完毕
        //ssc.stop()

        // 1. 启动采集器
        ssc.start()
        // 2. 等待采集器的关闭
        ssc.awaitTermination()
    }
}
```
三、DStream 创建
1、RDD 队列
测试过程中,可以通过使用ssc.queueStream(queueOfRDDs)来创建 DStream,每一个推送到这个队列中的 RDD,都会作为一个 DStream 处理。
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.rdd.RDD
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.collection.mutable

object SparkStreaming02_Queue {

    def main(args: Array[String]): Unit = {

        // TODO 创建环境对象
        // StreamingContext创建时,需要传递两个参数
        // 第一个参数表示环境配置
        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        // 第二个参数表示批量处理的周期(采集周期)
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val rddQueue = new mutable.Queue[RDD[Int]]()

        val inputStream = ssc.queueStream(rddQueue, oneAtATime = false)
        val mappedStream = inputStream.map((_,1))
        val reducedStream = mappedStream.reduceByKey(_ + _)
        reducedStream.print()

        ssc.start()

        for (i <- 1 to 5) {
            rddQueue += ssc.sparkContext.makeRDD(1 to 300, 10)
            Thread.sleep(2000)
        }

        ssc.awaitTermination()
    }
}
```
```
-------------------------------------------
Time: 1539075280000 ms
-------------------------------------------
(4,60)
(0,60)
(6,60)
(8,60)
(2,60)
(1,60)
(3,60)
(7,60)
(9,60)
(5,60)

-------------------------------------------
Time: 1539075284000 ms
-------------------------------------------
(4,60)
(0,60)
(6,60)
(8,60)
(2,60)
(1,60)
(3,60)
(7,60)
(9,60)
(5,60)

-------------------------------------------
Time: 1539075288000 ms
-------------------------------------------
(4,30)
(0,30)
(6,30)
(8,30)
(2,30)
(1,30)
(3,30)
(7,30)
(9,30)
(5,30)

-------------------------------------------
Time: 1539075292000 ms
-------------------------------------------
```
2、自定义数据源
需要继承 Receiver,并实现 onStart、onStop 方法来自定义数据源采集
自定义数据源:
```scala
package com.atguigu.bigdata.spark.streaming

import java.util.Random

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.dstream.ReceiverInputDStream
import org.apache.spark.streaming.receiver.Receiver
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming03_DIY {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val messageDS: ReceiverInputDStream[String] = ssc.receiverStream(new MyReceiver())
        messageDS.print()

        ssc.start()
        ssc.awaitTermination()
    }

    class MyReceiver extends Receiver[String](StorageLevel.MEMORY_ONLY) {

        private var flg = true

        override def onStart(): Unit = {
            new Thread(new Runnable {
                override def run(): Unit = {
                    while ( flg ) {
                        val message = "采集的数据为:" + new Random().nextInt(10).toString
                        store(message)
                        Thread.sleep(500)
                    }
                }
            }).start()
        }

        override def onStop(): Unit = {
            flg = false
        }
    }
}
```
使用自定义的数据源采集数据:
```scala
object FileStream {

    def main(args: Array[String]): Unit = {

        //1.初始化 Spark 配置信息
        val sparkConf = new SparkConf().setMaster("local[*]")
            .setAppName("StreamWordCount")

        //2.初始化 SparkStreamingContext
        val ssc = new StreamingContext(sparkConf, Seconds(5))

        //3.创建自定义 receiver 的 Streaming
        val lineStream = ssc.receiverStream(new CustomerReceiver("hadoop102", 9999))

        //4.将每一行数据做切分,形成一个个单词
        val wordStream = lineStream.flatMap(_.split("\t"))

        //5.将单词映射成元组(word,1)
        val wordAndOneStream = wordStream.map((_, 1))

        //6.将相同的单词次数做统计
        val wordAndCountStream = wordAndOneStream.reduceByKey(_ + _)

        //7.打印
        wordAndCountStream.print()

        //8.启动 SparkStreamingContext
        ssc.start()
        ssc.awaitTermination()
    }
}
```
3、Kafka 数据源
- ReceiverAPI:需要一个专门的 Executor 去接收数据,然后发送给其他的 Executor 做计算。存在的问题:接收数据的 Executor 和计算的 Executor 速度会有所不同,特别是当接收数据的 Executor 速度大于计算的 Executor 速度时,会导致计算数据的节点内存溢出。早期版本中提供此方式,当前版本已不适用。
- DirectAPI:由计算的 Executor 来主动消费 Kafka 的数据,速度由自身控制。
需求:通过 SparkStreaming 从 Kafka 读取数据,并将读取过来的数据做简单计算,最终打印到控制台
导入依赖:
```xml
<dependency>
    <groupId>org.apache.spark</groupId>
    <artifactId>spark-streaming-kafka-0-10_2.12</artifactId>
    <version>3.0.0</version>
</dependency>
```
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming04_Kafka {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        kafkaDataDS.map(_.value()).print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
四、DStream 转换
DStream 上的操作与 RDD 的类似,分为 Transformations(转换)和 Output Operations(输出)两种。此外,转换操作中还有一些比较特殊的原语,如:updateStateByKey()、transform() 以及各种 Window 相关的原语。
1、无状态转化操作
1.1、Transform
Transform 允许 DStream 上执行任意的 RDD-to-RDD 函数。即使这些函数并没有在 DStream 的 API 中暴露出来,通过该函数也可以方便地扩展 Spark API。该函数每一批次调度一次,其实就是对 DStream 中的 RDD 应用转换。
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming06_State_Transform {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val lines = ssc.socketTextStream("localhost", 9999)

        // transform方法可以将底层RDD获取到后进行操作
        // 1. DStream功能不完善
        // 2. 需要代码周期性的执行

        // Code : Driver端
        val newDS: DStream[String] = lines.transform(
            rdd => {
                // Code : Driver端,(周期性执行)
                rdd.map(
                    str => {
                        // Code : Executor端
                        str
                    }
                )
            }
        )
        // Code : Driver端
        val newDS1: DStream[String] = lines.map(
            data => {
                // Code : Executor端
                data
            }
        )

        ssc.start()
        ssc.awaitTermination()
    }
}
```
1.2、join
两个流之间的 join 需要两个流的批次大小一致,这样才能做到同时触发计算。计算过程就是对当前批次的两个流中各自的 RDD 进行 join,与两个 RDD 的 join 效果相同。
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming06_State_Join {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(5))

        val data9999 = ssc.socketTextStream("localhost", 9999)
        val data8888 = ssc.socketTextStream("localhost", 8888)

        val map9999: DStream[(String, Int)] = data9999.map((_,9))
        val map8888: DStream[(String, Int)] = data8888.map((_,8))

        // 所谓的DStream的Join操作,其实就是两个RDD的join
        val joinDS: DStream[(String, (Int, Int))] = map9999.join(map8888)

        joinDS.print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
2、有状态转化操作
2.1、UpdateStateByKey
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming05_State {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))
        ssc.checkpoint("cp")

        // 无状态数据操作,只对当前的采集周期内的数据进行处理
        // 在某些场合下,需要保留数据统计结果(状态),实现数据的汇总
        // 使用有状态操作时,需要设定检查点路径
        val datas = ssc.socketTextStream("localhost", 9999)

        val wordToOne = datas.map((_,1))

        //val wordToCount = wordToOne.reduceByKey(_+_)

        // updateStateByKey:根据key对数据的状态进行更新
        // 传递的参数中含有两个值
        // 第一个值表示相同的key的value数据
        // 第二个值表示缓存区相同key的value数据
        val state = wordToOne.updateStateByKey(
            ( seq:Seq[Int], buff:Option[Int] ) => {
                val newCount = buff.getOrElse(0) + seq.sum
                Option(newCount)
            }
        )

        state.print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
2.2、WindowOperations
Window Operations 可以设置窗口的大小和滑动窗口的间隔来动态地获取当前 Streaming 的运行状态。所有基于窗口的操作都需要两个参数,分别为窗口时长以及滑动步长。
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming06_State_Window {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val lines = ssc.socketTextStream("localhost", 9999)
        val wordToOne = lines.map((_,1))

        // 窗口的范围应该是采集周期的整数倍
        // 窗口是可以滑动的,默认情况下,一个采集周期进行滑动
        // 这样的话,可能会出现重复数据的计算,为了避免这种情况,可以改变滑动的幅度(步长)
        val windowDS: DStream[(String, Int)] = wordToOne.window(Seconds(6), Seconds(6))

        val wordToCount = windowDS.reduceByKey(_+_)

        wordToCount.print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming06_State_Window1 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))
        ssc.checkpoint("cp")

        val lines = ssc.socketTextStream("localhost", 9999)
        val wordToOne = lines.map((_,1))

        // reduceByKeyAndWindow : 当窗口范围比较大,但是滑动幅度比较小,那么可以采用增加数据和删除数据的方式
        // 无需重复计算,提升性能。
        val windowDS: DStream[(String, Int)] =
            wordToOne.reduceByKeyAndWindow(
                (x:Int, y:Int) => { x + y },
                (x:Int, y:Int) => { x - y },
                Seconds(9), Seconds(3))

        windowDS.print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
五、DStream 输出
- print():在运行流程序的驱动结点上打印 DStream 中每一批次数据的最开始 10 个元素。这用于开发和调试。在 Python API 中,同样的操作叫 pprint()。
- saveAsTextFiles(prefix, [suffix]):以 text 文件形式存储这个 DStream 的内容。每一批次的存储文件名基于参数中的 prefix 和 suffix:"prefix-TIME_IN_MS[.suffix]"。
- saveAsObjectFiles(prefix, [suffix]):以 Java 对象序列化的方式将 Stream 中的数据保存为 SequenceFiles。每一批次的存储文件名基于参数中的 "prefix-TIME_IN_MS[.suffix]"。Python 中目前不可用。
- saveAsHadoopFiles(prefix, [suffix]):将 Stream 中的数据保存为 Hadoop files。每一批次的存储文件名基于参数中的 "prefix-TIME_IN_MS[.suffix]"。Python API 中目前不可用。
- foreachRDD(func):这是最通用的输出操作,即将函数 func 用于产生于 stream 的每一个 RDD。其中参数传入的函数 func 应该实现将每一个 RDD 中数据推送到外部系统,如将 RDD 存入文件或者通过网络将其写入数据库。
注意:
- 连接不能写在 Driver 层面(连接对象无法序列化);
- 如果写在 foreach 中,则 RDD 中的每一条数据都会创建一次连接,得不偿失;
- 应增加 foreachPartition,在每个分区创建(获取)一次连接,如下面的示意代码所示。
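下面给出一个按分区创建连接的最小示意(其中 MySQL 连接串、word_count 表均为假设值,并非原文代码,仅用于说明连接应当创建在哪一层):

```scala
package com.atguigu.bigdata.spark.streaming

import java.sql.DriverManager

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming07_Output_ForeachPartition {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val lines = ssc.socketTextStream("localhost", 9999)
        val wordToCount = lines.flatMap(_.split(" ")).map((_, 1)).reduceByKey(_ + _)

        wordToCount.foreachRDD(rdd => {
            rdd.foreachPartition(iter => {
                // 连接对象无法序列化,因此在 Executor 端、每个分区只创建一次
                // 连接串与表结构均为假设值
                val conn = DriverManager.getConnection(
                    "jdbc:mysql://linux1:3306/spark", "root", "000000")
                val pstat = conn.prepareStatement(
                    "insert into word_count (word, cnt) values (?, ?)")
                iter.foreach {
                    case (word, count) =>
                        pstat.setString(1, word)
                        pstat.setInt(2, count)
                        pstat.executeUpdate()
                }
                pstat.close()
                conn.close()
            })
        })

        ssc.start()
        ssc.awaitTermination()
    }
}
```

连接在 foreachPartition 内部创建,意味着它只在 Executor 端、每个分区各创建一次,既绕开了序列化问题,也避免了逐条创建连接的开销。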
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming07_Output {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))
        ssc.checkpoint("cp")

        val lines = ssc.socketTextStream("localhost", 9999)
        val wordToOne = lines.map((_,1))

        val windowDS: DStream[(String, Int)] =
            wordToOne.reduceByKeyAndWindow(
                (x:Int, y:Int) => { x + y },
                (x:Int, y:Int) => { x - y },
                Seconds(9), Seconds(3))

        // SparkStreaming如果没有输出操作,那么会提示错误
        //windowDS.print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.DStream
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming07_Output1 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))
        ssc.checkpoint("cp")

        val lines = ssc.socketTextStream("localhost", 9999)
        val wordToOne = lines.map((_,1))

        val windowDS: DStream[(String, Int)] =
            wordToOne.reduceByKeyAndWindow(
                (x:Int, y:Int) => { x + y },
                (x:Int, y:Int) => { x - y },
                Seconds(9), Seconds(3))

        // foreachRDD不会出现时间戳
        windowDS.foreachRDD(
            rdd => {

            }
        )

        ssc.start()
        ssc.awaitTermination()
    }
}
```
六、优雅关闭
流式任务需要 7*24 小时执行,但是有时涉及到升级代码需要主动停止程序。而分布式程序没办法做到一个个进程去杀死,所以配置优雅的关闭就显得至关重要了。
使用外部文件系统来控制内部程序关闭
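在给出完整示例之前,先给出一个基于 HDFS 标记路径的监控线程示意(其中 hdfs://linux1:8020 地址、/stopSpark 路径和操作用户 atguigu 均为假设值,并非原文代码):第三方程序只需在 HDFS 上创建该路径,即可触发优雅关闭。

```scala
import java.net.URI

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}
import org.apache.spark.streaming.{StreamingContext, StreamingContextState}

// 示意代码:监控线程周期性检查 HDFS 上的标记路径,存在即优雅关闭
object StopMonitor {

    def start(ssc: StreamingContext): Unit = {
        new Thread(new Runnable {
            override def run(): Unit = {
                val fs = FileSystem.get(
                    new URI("hdfs://linux1:8020"), new Configuration(), "atguigu")
                while (true) {
                    Thread.sleep(5000)
                    // 第三方程序执行 hdfs dfs -mkdir /stopSpark 即可触发关闭
                    if (fs.exists(new Path("/stopSpark")) &&
                        ssc.getState() == StreamingContextState.ACTIVE) {
                        // 不再接收新数据,处理完已接收的数据后关闭
                        ssc.stop(stopSparkContext = true, stopGracefully = true)
                        System.exit(0)
                    }
                }
            }
        }).start()
    }
}
```

完整的可运行示例如下,其中用 Thread.sleep 模拟了上述第三方关闭状态的检查: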
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.spark.SparkConf
import org.apache.spark.streaming.{Seconds, StreamingContext, StreamingContextState}

object SparkStreaming08_Close {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val lines = ssc.socketTextStream("localhost", 9999)
        val wordToOne = lines.map((_,1))

        wordToOne.print()

        ssc.start()

        // 如果想要关闭采集器,那么需要创建新的线程
        // 而且需要在第三方程序中增加关闭状态
        new Thread(
            new Runnable {
                override def run(): Unit = {
                    // 优雅地关闭
                    // 计算节点不再接收新的数据,而是将现有的数据处理完毕,然后关闭
                    // Mysql : Table(stopSpark) => Row => data
                    // Redis : Data(K-V)
                    // ZK    : /stopSpark
                    // HDFS  : /stopSpark
                    Thread.sleep(5000)
                    val state: StreamingContextState = ssc.getState()
                    if ( state == StreamingContextState.ACTIVE ) {
                        ssc.stop(true, true)
                    }
                    System.exit(0)
                }
            }
        ).start()

        ssc.awaitTermination() // block 阻塞main线程
    }
}
```
七、SparkStreaming 案例实操
```scala
package com.atguigu.bigdata.spark.streaming

import java.util.{Properties, Random}

import org.apache.kafka.clients.producer.{KafkaProducer, ProducerConfig, ProducerRecord}

import scala.collection.mutable.ListBuffer

object SparkStreaming10_MockData {

    def main(args: Array[String]): Unit = {

        // 生成模拟数据
        // 格式 :timestamp area city userid adid
        // 含义: 时间戳 区域 城市 用户 广告
        // Application => Kafka => SparkStreaming => Analysis

        val prop = new Properties()
        // 添加配置
        prop.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "linux1:9092")
        prop.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
        prop.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, "org.apache.kafka.common.serialization.StringSerializer")
        val producer = new KafkaProducer[String, String](prop)

        while ( true ) {
            mockdata().foreach(
                data => {
                    // 向Kafka中生成数据
                    val record = new ProducerRecord[String, String]("atguiguNew", data)
                    producer.send(record)
                    println(data)
                }
            )
            Thread.sleep(2000)
        }
    }

    def mockdata() = {
        val list = ListBuffer[String]()
        val areaList = ListBuffer[String]("华北", "华东", "华南")
        val cityList = ListBuffer[String]("北京", "上海", "深圳")

        for ( i <- 1 to new Random().nextInt(50) ) {
            val area = areaList(new Random().nextInt(3))
            val city = cityList(new Random().nextInt(3))
            var userid = new Random().nextInt(6) + 1
            var adid = new Random().nextInt(6) + 1

            list.append(s"${System.currentTimeMillis()} ${area} ${city} ${userid} ${adid}")
        }

        list
    }
}
```
1、需求一:广告黑名单
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming11_Req1 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        kafkaDataDS.map(_.value()).print()

        ssc.start()
        ssc.awaitTermination()
    }
}
```
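后面的完整实现依赖工具类 com.atguigu.bigdata.spark.util.JDBCUtil,原文未给出其代码。下面补充一个最小示意实现(驱动类与连接参数均为假设值,生产环境一般会配合 Druid 等连接池使用):

```scala
package com.atguigu.bigdata.spark.util

import java.sql.{Connection, DriverManager, PreparedStatement}

// JDBCUtil 的最小示意实现:原文未给出该类,这里的驱动类与连接参数均为假设值
object JDBCUtil {

    def getConnection: Connection = {
        Class.forName("com.mysql.jdbc.Driver")
        DriverManager.getConnection(
            "jdbc:mysql://linux1:3306/spark_streaming?useUnicode=true&characterEncoding=utf8",
            "root", "000000")
    }

    // 执行增删改语句,params 依次填入 SQL 中的占位符
    def executeUpdate(conn: Connection, sql: String, params: Array[Any]): Int = {
        val pstat: PreparedStatement = conn.prepareStatement(sql)
        if (params != null) {
            params.zipWithIndex.foreach {
                case (p, i) => pstat.setObject(i + 1, p.asInstanceOf[AnyRef])
            }
        }
        val result = pstat.executeUpdate()
        pstat.close()
        result
    }

    // 判断查询是否存在结果
    def isExist(conn: Connection, sql: String, params: Array[Any]): Boolean = {
        val pstat = conn.prepareStatement(sql)
        if (params != null) {
            params.zipWithIndex.foreach {
                case (p, i) => pstat.setObject(i + 1, p.asInstanceOf[AnyRef])
            }
        }
        val rs = pstat.executeQuery()
        val flag = rs.next()
        rs.close()
        pstat.close()
        flag
    }
}
```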
```scala
package com.atguigu.bigdata.spark.streaming

import java.sql.ResultSet
import java.text.SimpleDateFormat

import com.atguigu.bigdata.spark.util.JDBCUtil
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.collection.mutable.ListBuffer

object SparkStreaming11_Req1_BlackList {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        val adClickData = kafkaDataDS.map(
            kafkaData => {
                val data = kafkaData.value()
                val datas = data.split(" ")
                AdClickData(datas(0),datas(1),datas(2),datas(3),datas(4))
            }
        )

        val ds = adClickData.transform(
            rdd => {
                // TODO 通过JDBC周期性获取黑名单数据
                val blackList = ListBuffer[String]()

                val conn = JDBCUtil.getConnection
                val pstat = conn.prepareStatement("select userid from black_list")

                val rs: ResultSet = pstat.executeQuery()
                while ( rs.next() ) {
                    blackList.append(rs.getString(1))
                }

                rs.close()
                pstat.close()
                conn.close()

                // TODO 判断点击用户是否在黑名单中
                val filterRDD = rdd.filter(
                    data => {
                        !blackList.contains(data.user)
                    }
                )

                // TODO 如果用户不在黑名单中,那么进行统计数量(每个采集周期)
                filterRDD.map(
                    data => {
                        val sdf = new SimpleDateFormat("yyyy-MM-dd")
                        val day = sdf.format(new java.util.Date( data.ts.toLong ))
                        val user = data.user
                        val ad = data.ad
                        (( day, user, ad ), 1) // (word, count)
                    }
                ).reduceByKey(_+_)
            }
        )

        ds.foreachRDD(
            rdd => {
                rdd.foreach{
                    case ( ( day, user, ad ), count ) => {
                        println(s"${day} ${user} ${ad} ${count}")
                        if ( count >= 30 ) {
                            // TODO 如果统计数量超过点击阈值(30),那么将用户拉入到黑名单
                            val conn = JDBCUtil.getConnection
                            val pstat = conn.prepareStatement(
                                """
                                  |insert into black_list (userid) values (?)
                                  |on DUPLICATE KEY
                                  |UPDATE userid = ?
                                """.stripMargin)
                            pstat.setString(1, user)
                            pstat.setString(2, user)
                            pstat.executeUpdate()
                            pstat.close()
                            conn.close()
                        } else {
                            // TODO 如果没有超过阈值,那么需要将当天的广告点击数量进行更新。
                            val conn = JDBCUtil.getConnection
                            val pstat = conn.prepareStatement(
                                """
                                  | select *
                                  | from user_ad_count
                                  | where dt = ? and userid = ? and adid = ?
                                """.stripMargin)
                            pstat.setString(1, day)
                            pstat.setString(2, user)
                            pstat.setString(3, ad)
                            val rs = pstat.executeQuery()
                            // 查询统计表数据
                            if ( rs.next() ) {
                                // 如果存在数据,那么更新
                                val pstat1 = conn.prepareStatement(
                                    """
                                      | update user_ad_count
                                      | set count = count + ?
                                      | where dt = ? and userid = ? and adid = ?
                                    """.stripMargin)
                                pstat1.setInt(1, count)
                                pstat1.setString(2, day)
                                pstat1.setString(3, user)
                                pstat1.setString(4, ad)
                                pstat1.executeUpdate()
                                pstat1.close()

                                // TODO 判断更新后的点击数据是否超过阈值,如果超过,那么将用户拉入到黑名单。
                                val pstat2 = conn.prepareStatement(
                                    """
                                      |select *
                                      |from user_ad_count
                                      |where dt = ? and userid = ? and adid = ? and count >= 30
                                    """.stripMargin)
                                pstat2.setString(1, day)
                                pstat2.setString(2, user)
                                pstat2.setString(3, ad)
                                val rs2 = pstat2.executeQuery()
                                if ( rs2.next() ) {
                                    val pstat3 = conn.prepareStatement(
                                        """
                                          |insert into black_list (userid) values (?)
                                          |on DUPLICATE KEY
                                          |UPDATE userid = ?
                                        """.stripMargin)
                                    pstat3.setString(1, user)
                                    pstat3.setString(2, user)
                                    pstat3.executeUpdate()
                                    pstat3.close()
                                }

                                rs2.close()
                                pstat2.close()
                            } else {
                                // 如果不存在数据,那么新增
                                val pstat1 = conn.prepareStatement(
                                    """
                                      | insert into user_ad_count ( dt, userid, adid, count ) values ( ?, ?, ?, ? )
                                    """.stripMargin)
                                pstat1.setString(1, day)
                                pstat1.setString(2, user)
                                pstat1.setString(3, ad)
                                pstat1.setInt(4, count)
                                pstat1.executeUpdate()
                                pstat1.close()
                            }

                            rs.close()
                            pstat.close()
                            conn.close()
                        }
                    }
                }
            }
        )

        ssc.start()
        ssc.awaitTermination()
    }

    // 广告点击数据
    case class AdClickData( ts:String, area:String, city:String, user:String, ad:String )
}
```
```scala
package com.atguigu.bigdata.spark.streaming

import java.text.SimpleDateFormat

import com.atguigu.bigdata.spark.util.JDBCUtil
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.collection.mutable.ListBuffer

object SparkStreaming11_Req1_BlackList1 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        val adClickData = kafkaDataDS.map(
            kafkaData => {
                val data = kafkaData.value()
                val datas = data.split(" ")
                AdClickData(datas(0),datas(1),datas(2),datas(3),datas(4))
            }
        )

        val ds = adClickData.transform(
            rdd => {
                // TODO 通过JDBC周期性获取黑名单数据
                val blackList = ListBuffer[String]()

                val conn = JDBCUtil.getConnection
                val pstat = conn.prepareStatement("select userid from black_list")

                val rs = pstat.executeQuery()
                while ( rs.next() ) {
                    blackList.append(rs.getString(1))
                }

                rs.close()
                pstat.close()
                conn.close()

                // TODO 判断点击用户是否在黑名单中
                val filterRDD = rdd.filter(
                    data => {
                        !blackList.contains(data.user)
                    }
                )

                // TODO 如果用户不在黑名单中,那么进行统计数量(每个采集周期)
                filterRDD.map(
                    data => {
                        val sdf = new SimpleDateFormat("yyyy-MM-dd")
                        val day = sdf.format(new java.util.Date( data.ts.toLong ))
                        val user = data.user
                        val ad = data.ad
                        (( day, user, ad ), 1) // (word, count)
                    }
                ).reduceByKey(_+_)
            }
        )

        ds.foreachRDD(
            rdd => {
                // rdd.foreach方法会每一条数据创建连接
                // foreach方法是RDD的算子,算子之外的代码是在Driver端执行,算子内的代码是在Executor端执行
                // 这样就会涉及闭包操作,Driver端的数据就需要传递到Executor端,需要将数据进行序列化
                // 数据库的连接对象是不能序列化的。

                // RDD提供了一个算子可以有效提升效率 : foreachPartition
                // 可以一个分区创建一个连接对象,这样可以大幅度减少连接对象的数量,提升效率
                rdd.foreachPartition(iter => {
                    val conn = JDBCUtil.getConnection
                    iter.foreach{
                        case ( ( day, user, ad ), count ) => {
                        }
                    }
                    conn.close()
                })

                rdd.foreach{
                    case ( ( day, user, ad ), count ) => {
                        println(s"${day} ${user} ${ad} ${count}")
                        if ( count >= 30 ) {
                            // TODO 如果统计数量超过点击阈值(30),那么将用户拉入到黑名单
                            val conn = JDBCUtil.getConnection
                            val sql = """
                                        |insert into black_list (userid) values (?)
                                        |on DUPLICATE KEY
                                        |UPDATE userid = ?
                                      """.stripMargin
                            JDBCUtil.executeUpdate(conn, sql, Array( user, user ))
                            conn.close()
                        } else {
                            // TODO 如果没有超过阈值,那么需要将当天的广告点击数量进行更新。
                            val conn = JDBCUtil.getConnection
                            val sql = """
                                        | select *
                                        | from user_ad_count
                                        | where dt = ? and userid = ? and adid = ?
                                      """.stripMargin
                            val flg = JDBCUtil.isExist(conn, sql, Array( day, user, ad ))

                            // 查询统计表数据
                            if ( flg ) {
                                // 如果存在数据,那么更新
                                val sql1 = """
                                             | update user_ad_count
                                             | set count = count + ?
                                             | where dt = ? and userid = ? and adid = ?
                                           """.stripMargin
                                JDBCUtil.executeUpdate(conn, sql1, Array(count, day, user, ad))

                                // TODO 判断更新后的点击数据是否超过阈值,如果超过,那么将用户拉入到黑名单。
                                val sql2 = """
                                             |select *
                                             |from user_ad_count
                                             |where dt = ? and userid = ? and adid = ? and count >= 30
                                           """.stripMargin
                                val flg1 = JDBCUtil.isExist(conn, sql2, Array( day, user, ad ))
                                if ( flg1 ) {
                                    val sql3 = """
                                                 |insert into black_list (userid) values (?)
                                                 |on DUPLICATE KEY
                                                 |UPDATE userid = ?
                                               """.stripMargin
                                    JDBCUtil.executeUpdate(conn, sql3, Array( user, user ))
                                }
                            } else {
                                val sql4 = """
                                             | insert into user_ad_count ( dt, userid, adid, count ) values ( ?, ?, ?, ? )
                                           """.stripMargin
                                JDBCUtil.executeUpdate(conn, sql4, Array( day, user, ad, count ))
                            }
                            conn.close()
                        }
                    }
                }
            }
        )

        ssc.start()
        ssc.awaitTermination()
    }

    // 广告点击数据
    case class AdClickData( ts:String, area:String, city:String, user:String, ad:String )
}
```
2、需求二:广告点击量实时统计
```scala
package com.atguigu.bigdata.spark.streaming

import java.text.SimpleDateFormat

import com.atguigu.bigdata.spark.streaming.SparkStreaming11_Req1_BlackList.AdClickData
import com.atguigu.bigdata.spark.util.JDBCUtil
import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming12_Req2 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(3))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        val adClickData = kafkaDataDS.map(
            kafkaData => {
                val data = kafkaData.value()
                val datas = data.split(" ")
                AdClickData(datas(0),datas(1),datas(2),datas(3),datas(4))
            }
        )

        val reduceDS = adClickData.map(
            data => {
                val sdf = new SimpleDateFormat("yyyy-MM-dd")
                val day = sdf.format(new java.util.Date( data.ts.toLong ))
                val area = data.area
                val city = data.city
                val ad = data.ad

                ( ( day, area, city, ad ), 1 )
            }
        ).reduceByKey(_+_)

        reduceDS.foreachRDD(
            rdd => {
                rdd.foreachPartition(
                    iter => {
                        val conn = JDBCUtil.getConnection
                        val pstat = conn.prepareStatement(
                            """
                              | insert into area_city_ad_count ( dt, area, city, adid, count )
                              | values ( ?, ?, ?, ?, ? )
                              | on DUPLICATE KEY
                              | UPDATE count = count + ?
                            """.stripMargin)
                        iter.foreach{
                            case ( ( day, area, city, ad ), sum ) => {
                                pstat.setString(1, day)
                                pstat.setString(2, area)
                                pstat.setString(3, city)
                                pstat.setString(4, ad)
                                pstat.setInt(5, sum)
                                pstat.setInt(6, sum)
                                pstat.executeUpdate()
                            }
                        }
                        pstat.close()
                        conn.close()
                    }
                )
            }
        )

        ssc.start()
        ssc.awaitTermination()
    }

    // 广告点击数据
    case class AdClickData( ts:String, area:String, city:String, user:String, ad:String )
}
```
3、需求三:最近一小时广告点击量
```scala
package com.atguigu.bigdata.spark.streaming

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

object SparkStreaming13_Req3 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(5))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        val adClickData = kafkaDataDS.map(
            kafkaData => {
                val data = kafkaData.value()
                val datas = data.split(" ")
                AdClickData(datas(0),datas(1),datas(2),datas(3),datas(4))
            }
        )

        // 最近一分钟,每10秒计算一次
        // 12:01 => 12:00
        // 12:11 => 12:10
        // 12:19 => 12:10
        // 12:25 => 12:20
        // 12:59 => 12:50

        // 55 => 50, 49 => 40, 32 => 30
        // 55 / 10 * 10 => 50
        // 49 / 10 * 10 => 40
        // 32 / 10 * 10 => 30

        // 这里涉及窗口的计算
        val reduceDS = adClickData.map(
            data => {
                val ts = data.ts.toLong
                val newTS = ts / 10000 * 10000
                ( newTS, 1 )
            }
        ).reduceByKeyAndWindow((x:Int,y:Int)=>{x+y}, Seconds(60), Seconds(10))

        reduceDS.print()

        ssc.start()
        ssc.awaitTermination()
    }

    // 广告点击数据
    case class AdClickData( ts:String, area:String, city:String, user:String, ad:String )
}
```
```scala
package com.atguigu.bigdata.spark.streaming

import java.io.{File, FileWriter, PrintWriter}
import java.text.SimpleDateFormat

import org.apache.kafka.clients.consumer.{ConsumerConfig, ConsumerRecord}
import org.apache.spark.SparkConf
import org.apache.spark.streaming.dstream.InputDStream
import org.apache.spark.streaming.kafka010.{ConsumerStrategies, KafkaUtils, LocationStrategies}
import org.apache.spark.streaming.{Seconds, StreamingContext}

import scala.collection.mutable.ListBuffer

object SparkStreaming13_Req31 {

    def main(args: Array[String]): Unit = {

        val sparkConf = new SparkConf().setMaster("local[*]").setAppName("SparkStreaming")
        val ssc = new StreamingContext(sparkConf, Seconds(5))

        val kafkaPara: Map[String, Object] = Map[String, Object](
            ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG -> "linux1:9092,linux2:9092,linux3:9092",
            ConsumerConfig.GROUP_ID_CONFIG -> "atguigu",
            "key.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer",
            "value.deserializer" -> "org.apache.kafka.common.serialization.StringDeserializer"
        )

        val kafkaDataDS: InputDStream[ConsumerRecord[String, String]] =
            KafkaUtils.createDirectStream[String, String](
                ssc,
                LocationStrategies.PreferConsistent,
                ConsumerStrategies.Subscribe[String, String](Set("atguiguNew"), kafkaPara)
            )
        val adClickData = kafkaDataDS.map(
            kafkaData => {
                val data = kafkaData.value()
                val datas = data.split(" ")
                AdClickData(datas(0),datas(1),datas(2),datas(3),datas(4))
            }
        )

        // 最近一分钟,每10秒计算一次
        // 这里涉及窗口的计算
        val reduceDS = adClickData.map(
            data => {
                val ts = data.ts.toLong
                val newTS = ts / 10000 * 10000
                ( newTS, 1 )
            }
        ).reduceByKeyAndWindow((x:Int,y:Int)=>{x+y}, Seconds(60), Seconds(10))

        //reduceDS.print()

        reduceDS.foreachRDD(
            rdd => {
                val list = ListBuffer[String]()

                val datas: Array[(Long, Int)] = rdd.sortByKey(true).collect()
                datas.foreach{
                    case ( time, cnt ) => {
                        val timeString = new SimpleDateFormat("mm:ss").format(new java.util.Date(time.toLong))
                        list.append(s"""{"xtime":"${timeString}", "yval":"${cnt}"}""")
                    }
                }

                // 输出文件
                val out = new PrintWriter(new FileWriter(
                    new File("D:\\mineworkspace\\idea\\classes\\atguigu-classes\\datas\\adclick\\adclick.json")))
                out.println("["+list.mkString(",")+"]")
                out.flush()
                out.close()
            }
        )

        ssc.start()
        ssc.awaitTermination()
    }

    // 广告点击数据
    case class AdClickData( ts:String, area:String, city:String, user:String, ad:String )
}
```