
How to Integrate Flume with Kafka


This article shows how Flume integrates with Kafka. The material is brief and clearly organized, and should help clear up common questions about connecting the two.

Using Kafka with Flume

In CDH 5.2.0 and higher, Flume includes a Kafka source and sink. Use them to stream data from Kafka into Hadoop, or from any Flume source into Kafka.

Important: Do not configure a Kafka source to send data to a Kafka sink. If you do, the Kafka source sets the topic in the event header, overriding the sink configuration and creating an infinite loop that sends messages back and forth between the source and sink. If you need to use both a Kafka source and a Kafka sink, use an interceptor to modify the event header and set a different topic.
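
As a minimal sketch of that workaround (the agent name agent1, the source name src1, and the topic sink-topic are illustrative, not from this article): Flume's built-in static interceptor with preserveExisting set to false overwrites the topic header that the Kafka source adds, so the Kafka sink publishes to sink-topic instead of looping events back to the source topic.

agent1.sources.src1.interceptors = i1
agent1.sources.src1.interceptors.i1.type = static
agent1.sources.src1.interceptors.i1.key = topic
agent1.sources.src1.interceptors.i1.preserveExisting = false
agent1.sources.src1.interceptors.i1.value = sink-topic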

Kafka Source

Use the Kafka source to stream data from Kafka topics into Hadoop. The Kafka source can be combined with any Flume sink, making it easy to write data from Kafka to HDFS, HBase, and Solr.

The following Flume configuration example uses a Kafka source to send data to an HDFS sink:

tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1

tier1.sources.source1.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source1.zookeeperConnect = zk01.example.com:2181
tier1.sources.source1.topic = weblogs
tier1.sources.source1.groupId = flume
tier1.sources.source1.channels = channel1
tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = timestamp
tier1.sources.source1.kafka.consumer.timeout.ms = 100

tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /tmp/kafka/%{topic}/%y-%m-%d
tier1.sinks.sink1.hdfs.rollInterval = 5
tier1.sinks.sink1.hdfs.rollSize = 0
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1

For higher throughput, you can configure multiple Kafka sources to read from a single topic. If all the sources use the same groupId and the topic has multiple partitions, each source reads data from a different set of partitions, improving the ingest rate.
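
A rough sketch of a second source in the same consumer group, reusing the names from the example above and assuming the weblogs topic has at least two partitions so Kafka can assign each source its own subset:

tier1.sources = source1 source2

tier1.sources.source2.type = org.apache.flume.source.kafka.KafkaSource
tier1.sources.source2.zookeeperConnect = zk01.example.com:2181
tier1.sources.source2.topic = weblogs
tier1.sources.source2.groupId = flume
tier1.sources.source2.channels = channel1

The tier1.sources line replaces the single-source declaration in the example above; source1 keeps its existing configuration.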

The following table describes the properties the Kafka source supports; required properties are marked (required).

Table 1. Kafka Source Properties  

type (required): Must be set to org.apache.flume.source.kafka.KafkaSource.
zookeeperConnect (required): The URI of the ZooKeeper server or quorum used by Kafka. This can be a single node (for example, zk01.example.com:2181) or a comma-separated list of nodes in a ZooKeeper quorum (for example, zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181).
topic (required): The Kafka topic the source reads messages from. Flume supports only one topic per source.
groupID (default: flume): The unique identifier of the Kafka consumer group. Set the same groupID in all sources to indicate that they belong to the same consumer group.
batchSize (default: 1000): The maximum number of messages written to the channel in a single batch.
batchDurationMillis (default: 1000): The maximum time, in milliseconds, before a batch is written to the channel.
Other Kafka consumer properties: Used to configure the Kafka consumer used by the Kafka source. You can use any consumer properties supported by Kafka. Prepend the consumer property name with the prefix kafka. (for example, kafka.fetch.min.bytes). See the Kafka documentation for the full list of Kafka consumer properties.

Tuning

The Kafka source overrides two Kafka consumer properties:

1. auto.commit.enable is set to false by the source, and every batch is committed. For improved performance, set this to true instead by using kafka.auto.commit.enable. Note that this can lead to data loss if the source goes down before committing.

2. consumer.timeout.ms is set to 10, so when Flume polls Kafka for new data, it waits no more than 10 ms for the data to be available. Setting this to a higher value can reduce CPU utilization due to less frequent polling, but introduces latency in writing batches to the channel.
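
As a sketch of both overrides, applied to source1 from the example above through the kafka. prefix (the values shown are illustrative, not recommendations):

tier1.sources.source1.kafka.auto.commit.enable = true
tier1.sources.source1.kafka.consumer.timeout.ms = 100

With auto-commit enabled, a batch that was read from Kafka but not yet delivered to the channel can be lost if the source fails, so accept that trade-off only where some data loss is tolerable.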

Kafka Sink

Use the Kafka sink to send data from a Flume source to Kafka. You can use the Kafka sink in addition to Flume sinks such as HBase or HDFS.

The following Flume configuration example uses a Kafka sink with an exec source:

tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1

tier1.sources.source1.type = exec
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1

tier1.channels.channel1.type = memory
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000

tier1.sinks.sink1.type = org.apache.flume.sink.kafka.KafkaSink
tier1.sinks.sink1.topic = sink1
tier1.sinks.sink1.brokerList = kafka01.example.com:9092,kafka02.example.com:9092
tier1.sinks.sink1.channel = channel1
tier1.sinks.sink1.batchSize = 20

The following table describes the properties the Kafka sink supports; required properties are marked (required).

Table 2. Kafka Sink Properties  

type (required): Must be set to org.apache.flume.sink.kafka.KafkaSink.
brokerList (required): The brokers the Kafka sink uses to discover topic partitions, formatted as a comma-separated list of hostname:port entries. You do not need to specify the entire list of brokers, but Cloudera recommends that you specify at least two for high availability.
topic (default: default-flume-topic): The Kafka topic to which messages are published by default. If the event header contains a topic field, the event is published to that topic instead, overriding the configured topic.
batchSize (default: 100): The number of messages to process in a single batch. A larger batchSize can improve throughput but increases latency.
requiredAcks (default: 1): The number of replicas that must acknowledge a message before it is written successfully. Possible values are 0 (do not wait for an acknowledgement), 1 (wait for the leader to acknowledge only), and -1 (wait for all replicas to acknowledge). To avoid potential loss of data in case of a leader failure, set this to -1.
Other Kafka producer properties: Used to configure the Kafka producer used by the Kafka sink. You can use any producer properties supported by Kafka. Prepend the producer property name with the prefix kafka. (for example, kafka.compression.codec). See the Kafka documentation for the full list of Kafka producer properties.

The Kafka sink uses the topic and key properties from the FlumeEvent headers to determine where to send events in Kafka. If the header contains the topic property, that event is sent to the designated topic, overriding the configured topic. If the header contains the key property, that key is used to partition events within the topic; events with the same key are sent to the same partition. If the key header is not specified, events are distributed randomly across partitions. Use these properties to control the topics and partitions to which events are sent through the Flume source or an interceptor.
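
For example, a minimal sketch that extends the exec-source example above with Flume's built-in static interceptor, stamping every event with the same key header so that all events from this source land in one partition of the sink's topic (the value host-01 is illustrative; per-event keys would require a custom interceptor):

tier1.sources.source1.interceptors = i1
tier1.sources.source1.interceptors.i1.type = static
tier1.sources.source1.interceptors.i1.key = key
tier1.sources.source1.interceptors.i1.value = host-01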

Kafka Channel

CDH 5.3 and higher adds a Kafka channel to Flume, in addition to the existing memory and file channels. You can use the Kafka channel:

  • To write to Hadoop directly from Kafka without using a source (a minimal sketch of this appears after Table 3).

  • To write to Kafka directly from Flume sources without additional buffering.

  • As a reliable and highly available channel for any source/sink combination.

The following Flume configuration uses a Kafka channel with an exec source and an HDFS sink:

tier1.sources = source1
tier1.channels = channel1
tier1.sinks = sink1

tier1.sources.source1.type = exec
tier1.sources.source1.command = /usr/bin/vmstat 1
tier1.sources.source1.channels = channel1

tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.capacity = 10000
tier1.channels.channel1.transactionCapacity = 1000
tier1.channels.channel1.brokerList = kafka02.example.com:9092,kafka03.example.com:9092
tier1.channels.channel1.topic = channel2
tier1.channels.channel1.zookeeperConnect = zk01.example.com:2181
tier1.channels.channel1.parseAsFlumeEvent = true

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /tmp/kafka/channel
tier1.sinks.sink1.hdfs.rollInterval = 5
tier1.sinks.sink1.hdfs.rollSize = 0
tier1.sinks.sink1.hdfs.rollCount = 0
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1

The following table describes the properties the Kafka channel supports; required properties are marked (required).

Table 3. Kafka Channel Properties  

type (required): Must be set to org.apache.flume.channel.kafka.KafkaChannel.
brokerList (required): The brokers the Kafka channel uses to discover topic partitions, formatted as a comma-separated list of hostname:port entries. You do not need to specify the entire list of brokers, but Cloudera recommends that you specify at least two for high availability.
zookeeperConnect (required): The URI of the ZooKeeper server or quorum used by Kafka. This can be a single node (for example, zk01.example.com:2181) or a comma-separated list of nodes in a ZooKeeper quorum (for example, zk01.example.com:2181,zk02.example.com:2181,zk03.example.com:2181).
topic (default: flume-channel): The Kafka topic the channel uses.
groupID (default: flume): The unique identifier of the Kafka consumer group the channel uses to register with Kafka.
parseAsFlumeEvent (default: true): Set to true if a Flume source is writing to the channel and expects Avro datums with the FlumeEvent schema (org.apache.flume.source.avro.AvroFlumeEvent) in the channel. Set to false if other producers are writing to the topic that the channel is using.
readSmallestOffset (default: false): If true, reads all data in the topic. If false, reads only data written after the channel has started. Only used when parseAsFlumeEvent is false.
kafka.consumer.timeout.ms (default: 100): The polling interval, in milliseconds, when the channel is writing data to the sink.
Other Kafka producer properties: Used to configure the Kafka producer. You can use any producer properties supported by Kafka. Prepend the producer property name with the prefix kafka. (for example, kafka.compression.codec). See the Kafka documentation for the full list of Kafka producer properties.
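
As a minimal sketch of the first use case listed above (writing to Hadoop directly from Kafka without a Flume source), assuming external producers publish plain, non-Flume messages to the channel2 topic, which is why parseAsFlumeEvent is set to false; the broker, ZooKeeper, and HDFS settings are carried over from the earlier example:

tier1.channels = channel1
tier1.sinks = sink1

tier1.channels.channel1.type = org.apache.flume.channel.kafka.KafkaChannel
tier1.channels.channel1.brokerList = kafka02.example.com:9092,kafka03.example.com:9092
tier1.channels.channel1.zookeeperConnect = zk01.example.com:2181
tier1.channels.channel1.topic = channel2
tier1.channels.channel1.parseAsFlumeEvent = false

tier1.sinks.sink1.type = hdfs
tier1.sinks.sink1.hdfs.path = /tmp/kafka/channel
tier1.sinks.sink1.hdfs.fileType = DataStream
tier1.sinks.sink1.channel = channel1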



That covers how to integrate Flume with Kafka. Thanks for reading, and hopefully this overview has been helpful.
