This article explains how a Flume source collects data and delivers it to the console through an in-memory channel, a setup that many people struggle with in practice.
Requirement:
Flume sources collect data from NetCat and Exec and deliver it through a memory channel to the console.
The memory channel can also be replaced with a file channel for aggregation, where checkpointDir records the offsets:
# Use a channel which buffers events on local disk (file channel)
agent1.channels.channel1.type = file
agent1.channels.channel1.checkpointDir = /var/checkpoint
agent1.channels.channel1.dataDirs = /var/tmp
agent1.channels.channel1.capacity = 1000
agent1.channels.channel1.transactionCapacity = 100
This example uses the memory-channel configuration file:
a1.sources = r1 r2
a1.sinks = k1
a1.channels = c1

a1.sources.r1.type = netcat
a1.sources.r1.bind = 0.0.0.0
a1.sources.r1.port = 44444

a1.sources.r2.type = exec
a1.sources.r2.command = tail -F /home/hadoop/data/data.log

# Describe the sink
a1.sinks.k1.type = logger

# Use a channel which buffers events in memory
a1.channels.c1.type = memory
a1.channels.c1.capacity = 1000
a1.channels.c1.transactionCapacity = 100

# Bind the source and sink to the channel
a1.sources.r1.channels = c1
a1.sources.r2.channels = c1
a1.sinks.k1.channel = c1
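With the configuration saved, the agent can be started with the standard flume-ng launcher. A minimal sketch, assuming the file is saved as $FLUME_HOME/conf/example.conf (the path is an assumption; adjust it to wherever the file actually lives):

```shell
# Start agent a1 in the foreground and log events at INFO level to the console
flume-ng agent \
  --name a1 \
  --conf $FLUME_HOME/conf \
  --conf-file $FLUME_HOME/conf/example.conf \
  -Dflume.root.logger=INFO,console
```

The --name value must match the agent prefix used in the configuration (a1 here), otherwise no components are started.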
Test results:
[hadoop@hadoop001 ~]$ telnet localhost 44444
Trying ::1...
Connected to localhost.
Escape character is '^]'.
ZOURC123456789
OK
[hadoop@hadoop001 data]$ echo 123 >> data.log
[hadoop@hadoop001 data]$
Console output:
2018-08-10 20:12:10,426 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 31 32 33                                        123 }
2018-08-10 20:12:32,439 (SinkRunner-PollingRunner-DefaultSinkProcessor) [INFO - org.apache.flume.sink.LoggerSink.process(LoggerSink.java:94)] Event: { headers:{} body: 5A 4F 55 52 43 31 32 33 34 35 36 37 38 39 0D    ZOURC123456789. }
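The logger sink prints each event body as hex bytes followed by a readable preview. As a quick sketch (plain bash, not part of Flume), the hex bytes from the log can be reproduced with printf's \x escapes to confirm what was received:

```shell
# Hex bytes 31 32 33 from the first event are ASCII for "123" (from the echo into data.log)
printf '\x31\x32\x33\n'
# prints: 123

# The second body is "ZOURC123456789" followed by 0D, the carriage return telnet appends
printf '\x5A\x4F\x55\x52\x43\x31\x32\x33\x34\x35\x36\x37\x38\x39\x0D\n'
```

The trailing 0D byte explains the "." shown at the end of the second event's preview: the logger sink replaces non-printable bytes with a dot.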