Spark 2.x Quick Start Tutorial 6


Integrating Spark Streaming with Flume

1. Lab Introduction

1.1 Lab Content

Flume is a very popular log collection system and can serve as an advanced data source for DStreams. This lab shows how to have Flume push messages to Spark Streaming, and how Spark Streaming processes those messages once they arrive.

1.2 Prerequisite Courses

Hadoop basics and advanced course: https://www.shiyanlou.com/courses/237

1.3 Knowledge Points

Flume agent

Spark Streaming

1.4 Lab Environment

spark-2.1.0-bin-hadoop2.6

Xfce terminal

1.5 Intended Audience

This course is at the beginner level and is suitable for users with basic Spark knowledge; some familiarity with Streaming will make it easier to follow.

2. Lab Steps

2.1 Preparation

Double-click the Xfce terminal on the desktop, switch to the hadoop user with the su command (the hadoop user's password is hadoop), and use cd to enter the /opt directory. Since this uses the local environment, the lab can be completed without starting any other processes.

$ su hadoop
$ cd /opt/
$ jps

2.2 Installing and Preparing Flume

You can download Flume into the ShiYanLou environment with the following commands, then install and configure it.

$ sudo wget http://labfile.oss.aliyuncs.com/courses/785/apache-flume-1.6.0-bin.tar.gz
$ sudo tar -zxvf apache-flume-1.6.0-bin.tar.gz

Edit flume-env.sh:

$ cd apache-flume-1.6.0-bin/conf/
$ sudo cp flume-env.sh.template flume-env.sh
$ sudo vi flume-env.sh

Changes needed in flume-env.sh:

export JAVA_HOME=/usr/lib/jvm/java-8-oracle

# Give Flume more memory and pre-allocate, enable remote monitoring via JMX
export JAVA_OPTS="-Xms100m -Xmx2000m -Dcom.sun.management.jmxremote"

Create the flume_push_spark directory and grant permissions:

$ sudo mkdir /opt/flume_push_spark
$ sudo chmod -R 777 /opt/
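Optionally, you can verify the installation by printing the Flume version; flume-ng version is a standard subcommand, run here from the Flume root directory:

$ cd /opt/apache-flume-1.6.0-bin
$ bin/flume-ng version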

Create the agent configuration file flume-push-spark.properties in the conf directory (/opt/apache-flume-1.6.0-bin/conf):

$ sudo vi flume-push-spark.properties

Add the following content:

# Name the components on this agent
a1.sources=r1
a1.sinks=k1
a1.channels=c1

# Describe/configure the source
a1.sources.r1.type=spooldir
a1.sources.r1.spoolDir=/opt/flume_push_spark
a1.sources.r1.fileHeader = false
a1.sources.r1.interceptors = i1
a1.sources.r1.interceptors.i1.type = timestamp

# Use a channel
a1.channels.c1.type=file
a1.channels.c1.checkpointDir=/opt/logs_tmp_cp
a1.channels.c1.dataDirs=/opt/logs_tmp

# Describe the sink
a1.sinks.k1.type = avro
a1.sinks.k1.hostname = localhost
a1.sinks.k1.port = 9999

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sinks.k1.channel = c1

In the configuration above, the Flume source type is set to spooldir, so Flume watches the /opt/flume_push_spark directory for new files. The Flume sink type is set to avro, bound to port 9999 on localhost. Flume therefore gathers the messages from that directory and pushes them through the sink to localhost:9999, while our Spark Streaming program keeps listening on that port; as soon as messages arrive, the Spark Streaming application picks them up and processes them.
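On the Spark side, this push model corresponds to FlumeUtils.createStream from the spark-streaming-flume module: the call starts an Avro server inside the Spark Streaming application on the given host and port, and the Flume avro sink pushes events to it. A minimal sketch of the receiver setup (the full program appears in section 2.3):

// Receive Flume events pushed to localhost:9999 in 5-second batches
// (sketch only; see section 2.3 for the complete, runnable program)
JavaStreamingContext jssc = new JavaStreamingContext(
        new SparkConf().setMaster("local[3]").setAppName("flumepushspark"),
        Durations.seconds(5));
JavaReceiverInputDStream<SparkFlumeEvent> lines =
        FlumeUtils.createStream(jssc, "localhost", 9999);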

2.3 Code Implementation

1). Create a Maven project

Double-click the icon on the desktop to open Scala IDE -> OK.

Click OK again.

Click File -> New -> Other

Search for maven -> click Maven Project -> Next

Click Next again

选中 "quickstart" -> Next

输入 "Group Id", "Artifact Id", " Package" -> Finish 。

Click the spark project -> edit pom.xml -> after saving, the jar packages will be downloaded automatically.

Add the following content and save:

<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-core_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-sql_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-hive_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-client</artifactId>
  <version>2.6.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-kafka_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.spark</groupId>
  <artifactId>spark-streaming-flume_2.10</artifactId>
  <version>1.5.1</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpclient</artifactId>
  <version>4.4.1</version>
</dependency>
<dependency>
  <groupId>org.apache.httpcomponents</groupId>
  <artifactId>httpcore</artifactId>
  <version>4.4.1</version>
</dependency>

<build>
  <sourceDirectory>src/main/java</sourceDirectory>
  <testSourceDirectory>src/main/test</testSourceDirectory>
  <plugins>
    <plugin>
      <artifactId>maven-assembly-plugin</artifactId>
      <configuration>
        <descriptorRefs>
          <descriptorRef>jar-with-dependencies</descriptorRef>
        </descriptorRefs>
        <archive>
          <manifest>
            <mainClass></mainClass>
          </manifest>
        </archive>
      </configuration>
      <executions>
        <execution>
          <id>make-assembly</id>
          <phase>package</phase>
          <goals>
            <goal>single</goal>
          </goals>
        </execution>
      </executions>
    </plugin>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>exec-maven-plugin</artifactId>
      <version>1.2.1</version>
      <executions>
        <execution>
          <goals>
            <goal>exec</goal>
          </goals>
        </execution>
      </executions>
      <configuration>
        <executable>java</executable>
        <includeProjectDependencies>true</includeProjectDependencies>
        <includePluginDependencies>false</includePluginDependencies>
        <classpathScope>compile</classpathScope>
        <mainClass>cn.com.syl.spark.App</mainClass>
      </configuration>
    </plugin>
    <plugin>
      <groupId>org.apache.maven.plugins</groupId>
      <artifactId>maven-compiler-plugin</artifactId>
      <configuration>
        <source>1.6</source>
        <target>1.6</target>
      </configuration>
    </plugin>
  </plugins>
</build>

The final pom.xml looks like this:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
  xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
  <modelVersion>4.0.0</modelVersion>

  <groupId>cn.com.syl</groupId>
  <artifactId>spark</artifactId>
  <version>0.0.1-SNAPSHOT</version>
  <packaging>jar</packaging>

  <name>spark</name>
  <url>http://maven.apache.org</url>

  <properties>
    <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
  </properties>

  <dependencies>
    <dependency>
      <groupId>junit</groupId>
      <artifactId>junit</artifactId>
      <version>3.8.1</version>
      <scope>test</scope>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-core_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-sql_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-hive_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.hadoop</groupId>
      <artifactId>hadoop-client</artifactId>
      <version>2.6.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-kafka_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.spark</groupId>
      <artifactId>spark-streaming-flume_2.10</artifactId>
      <version>1.5.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpclient</artifactId>
      <version>4.4.1</version>
    </dependency>
    <dependency>
      <groupId>org.apache.httpcomponents</groupId>
      <artifactId>httpcore</artifactId>
      <version>4.4.1</version>
    </dependency>
  </dependencies>

  <build>
    <sourceDirectory>src/main/java</sourceDirectory>
    <testSourceDirectory>src/main/test</testSourceDirectory>
    <plugins>
      <plugin>
        <artifactId>maven-assembly-plugin</artifactId>
        <configuration>
          <descriptorRefs>
            <descriptorRef>jar-with-dependencies</descriptorRef>
          </descriptorRefs>
          <archive>
            <manifest>
              <mainClass></mainClass>
            </manifest>
          </archive>
        </configuration>
        <executions>
          <execution>
            <id>make-assembly</id>
            <phase>package</phase>
            <goals>
              <goal>single</goal>
            </goals>
          </execution>
        </executions>
      </plugin>
      <plugin>
        <groupId>org.codehaus.mojo</groupId>
        <artifactId>exec-maven-plugin</artifactId>
        <version>1.2.1</version>
        <executions>
          <execution>
            <goals>
              <goal>exec</goal>
            </goals>
          </execution>
        </executions>
        <configuration>
          <executable>java</executable>
          <includeProjectDependencies>true</includeProjectDependencies>
          <includePluginDependencies>false</includePluginDependencies>
          <classpathScope>compile</classpathScope>
          <mainClass>cn.com.syl.spark.App</mainClass>
        </configuration>
      </plugin>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-compiler-plugin</artifactId>
        <configuration>
          <source>1.6</source>
          <target>1.6</target>
        </configuration>
      </plugin>
    </plugins>
  </build>
</project>

There are many dependencies to download, which takes some time; please wait patiently for it to finish.

You may encounter the following problem:

Solution:

选中 "Project configuration ..."-> Quick Fix -> Finish

2). Write the code

Select the cn.com.syl.spark package -> press Ctrl+N -> search for class -> select Java Class -> Next

Enter the class name -> Finish

The FlumePushDstream.java code is as follows:

package cn.com.syl.spark;

import java.util.Arrays;

import org.apache.spark.SparkConf;
import org.apache.spark.api.java.function.FlatMapFunction;
import org.apache.spark.api.java.function.Function2;
import org.apache.spark.api.java.function.PairFunction;
import org.apache.spark.streaming.Durations;
import org.apache.spark.streaming.api.java.JavaDStream;
import org.apache.spark.streaming.api.java.JavaPairDStream;
import org.apache.spark.streaming.api.java.JavaReceiverInputDStream;
import org.apache.spark.streaming.api.java.JavaStreamingContext;
import org.apache.spark.streaming.flume.FlumeUtils;
import org.apache.spark.streaming.flume.SparkFlumeEvent;

import scala.Tuple2;

public class FlumePushDstream {
    public static void main(String[] args) {
        SparkConf conf = new SparkConf()
                .setMaster("local[3]")
                .setAppName("flumepushspark");
        // 5-second batch interval
        JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

        // Push model: Spark Streaming runs an Avro server on localhost:9999
        // that the Flume avro sink pushes events to
        JavaReceiverInputDStream<SparkFlumeEvent> lines =
                FlumeUtils.createStream(jssc, "localhost", 9999);

        // Split each Flume event body into words
        JavaDStream<String> words = lines.flatMap(
                new FlatMapFunction<SparkFlumeEvent, String>() {
                    private static final long serialVersionUID = 1L;

                    @Override
                    public Iterable<String> call(SparkFlumeEvent event) throws Exception {
                        String line = new String(event.event().getBody().array());
                        return Arrays.asList(line.split(" "));
                    }
                });

        // Map each word to a (word, 1) pair
        JavaPairDStream<String, Integer> pairs = words.mapToPair(
                new PairFunction<String, String, Integer>() {
                    private static final long serialVersionUID = 1L;

                    @Override
                    public Tuple2<String, Integer> call(String word) throws Exception {
                        return new Tuple2<String, Integer>(word, 1);
                    }
                });

        // Sum the counts for each word within the batch
        JavaPairDStream<String, Integer> wordCounts = pairs.reduceByKey(
                new Function2<Integer, Integer, Integer>() {
                    private static final long serialVersionUID = 1L;

                    @Override
                    public Integer call(Integer v1, Integer v2) throws Exception {
                        return v1 + v2;
                    }
                });

        wordCounts.print();

        jssc.start();
        jssc.awaitTermination();
        jssc.close();
    }
}
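Note that the pom above builds against Spark 1.5.1, while the lab environment ships spark-2.1.0-bin-hadoop2.6. If you instead choose to build against Spark 2.x artifacts (for example spark-streaming_2.11 and spark-streaming-flume_2.11 at version 2.1.0 — an alternative setup, not the one this lab uses), be aware that the Java FlatMapFunction contract changed in Spark 2.0: call must return an Iterator instead of an Iterable. A minimal sketch of the flatMap step under that assumption:

// Spark 2.x variant of the flatMap step (requires import java.util.Iterator)
JavaDStream<String> words = lines.flatMap(
        new FlatMapFunction<SparkFlumeEvent, String>() {
            private static final long serialVersionUID = 1L;

            @Override
            public Iterator<String> call(SparkFlumeEvent event) throws Exception {
                String line = new String(event.event().getBody().array());
                // Iterator, not Iterable, is required from Spark 2.0 onward
                return Arrays.asList(line.split(" ")).iterator();
            }
        });

The rest of the program is unchanged.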

Note: Spark Streaming must be started first so that it is listening on port 9999; only then should Flume be started to push data.

Start the Spark Streaming program (run the FlumePushDstream class from Scala IDE, e.g. right-click it -> Run As -> Java Application).
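Before starting Flume, you can optionally confirm that the receiver is actually listening on port 9999; a quick check, assuming the netstat utility is available in the environment:

$ netstat -tln | grep 9999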

Open an Xfce terminal, change into the Flume root directory, and start the Flume agent:

$ cd /opt/apache-flume-1.6.0-bin
$ bin/flume-ng agent --conf conf --conf-file conf/flume-push-spark.properties --name a1 -Dflume.root.logger=INFO,console

Then open another Xfce terminal and copy a file into /opt/flume_push_spark:

$ sudo cp /opt/spark-2.1.0-bin-hadoop2.6/README.md /opt/flume_push_spark
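Once Flume has ingested the file, the spooling directory source marks it as processed by renaming it, by default with a .COMPLETED suffix, so listing the directory should show something like:

$ ls /opt/flume_push_spark
README.md.COMPLETED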

Quickly switch to the Console view in Scala IDE. The screen shows the program's runtime output, refreshed every 5 seconds; among the large amount of log output you will find the word counts, of which print() shows only the first ten by default:
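The exact words and counts depend on the contents of README.md; each batch printed by wordCounts.print() has this general shape (placeholders, not captured output):

-------------------------------------------
Time: <batch timestamp> ms
-------------------------------------------
(<word>,<count>)
(<word>,<count>)
...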

This completes pushing data from Flume to Spark Streaming. When the lab is over, close each terminal by switching to its window and pressing Ctrl+C to stop the running program.

Additional notes:

If you wrote the code on a Windows platform, you can run it by packaging it into a jar, as follows:

Select the spark project, right-click -> Run As -> Run Configurations

Double-click Maven Build -> Workspace -> select spark -> OK

Enter compile package in the Goals field -> Apply -> Run

When the build finishes, select the spark project and press F5 to refresh it.

Expand target, right-click the built jar, and open Properties:

Check its location.

As before, Spark Streaming must be started first, listening on port 9999, before Flume is started to push data.

spark-2.1.0-bin-hadoop2.6/bin/spark-submit \
    --class cn.com.syl.spark.FlumePushDstream \
    --num-executors 2 \
    --executor-cores 1 \
    /home/shiyanlou/scalaIDE_workspace/spark/target/spark-0.0.1-SNAPSHOT-jar-with-dependencies.jar

Open another Xfce terminal, change into the Flume root directory, and start the Flume agent:

$ cd /opt/apache-flume-1.6.0-bin
$ bin/flume-ng agent --conf conf --conf-file conf/flume-push-spark.properties --name a1 -Dflume.root.logger=INFO,console

Then open yet another Xfce terminal and copy a file into /opt/flume_push_spark:

$ sudo cp /opt/spark-2.1.0-bin-hadoop2.6/RELEASE /opt/flume_push_spark

Quickly switch back to the Spark Streaming terminal. The screen shows the runtime output, refreshed every 5 seconds, including the same word-count results as before, limited to the first ten lines by default. Because the window scrolls very quickly, no data was captured here, but the principle is the same: once you have learned the method, you are set.

3. Lab Summary

This lesson covered how to integrate Spark Streaming with Flume, and also explained how to package a jar on Windows and submit it to a remote server. We hope that after completing it you understand Spark Streaming and can get started quickly.

4. Further Reading

http://spark.apache.org/docs/latest/streaming-programming-guide.html
