
Flume Sinks

 

1. Flume's Collector Tier Event Sinks

collectorSink("fsdir", "fsfileprefix", rollmillis)

collectorSink writes events aggregated by a collector to HDFS. fsdir is the HDFS directory, fsfileprefix is the output file name prefix, and rollmillis is the interval in milliseconds after which the current output file is rolled.
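A collector node configuration in Flume OG's dataflow language might look like the following sketch (the host name, HDFS path, and 30-second roll interval are illustrative; 35853 is Flume OG's default collector port):

```
collector1 : collectorSource(35853) | collectorSink("hdfs://namenode/flume/logs/", "weblog-", 30000) ;
```

collectorSource(port) listens for events sent by the agent-tier sinks described in the next section.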

2. Flume's Agent Tier Event Sinks

agentSink[("machine"[,port])]

Defaults to agentE2ESink. If the machine argument is omitted, the values of flume.collector.event.host and flume.collector.event.port are used as the default collector (the same applies to the sinks below).

agentE2ESink[("machine"[,port])]

The persistent, end-to-end (E2E) agent: if the agent does not receive an acknowledgment from the collector that an event was written successfully, it resends the event until the acknowledgment arrives.

agentDFOSink[("machine" [,port])]

The disk-failover (DFO) agent with local buffering: when the agent detects that the collector is down, it keeps checking the collector's liveness so it can resend events, and data produced in the meantime is cached on local disk.

agentBESink[("machine"[,port])]

The best-effort (BE) agent takes no responsibility: if the collector fails, it does nothing, and the events it sends are simply dropped.
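A sketch wiring the three reliability levels above to the same collector (the tailed file and the collector address are illustrative):

```
agentA : tail("/var/log/app.log") | agentE2ESink("collector1", 35853) ;
agentB : tail("/var/log/app.log") | agentDFOSink("collector1", 35853) ;
agentC : tail("/var/log/app.log") | agentBESink("collector1", 35853) ;
```

In practice one reliability level is chosen per agent, depending on how much data loss is tolerable.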

agentE2EChain("m1[:_p1_]" [,"m2[:_p2_]"[,…]])

Specifies multiple collectors to improve availability. When sending an event to the primary collector fails, the agent fails over to the second collector; once every collector has failed, it persistently starts over from the first.

agentDFOChain("m1[:_p1_]"[, "m2[:_p2_]"[,…]])

As above, except that when sending to every collector fails, events are cached on local disk while the agent checks collector status and retries.

agentBEChain("m1[:_p1_]"[, "m2[:_p2_]"[,…]])

As above, except that when sending to every collector fails, the events are dropped.
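A failover-chain sketch (collector names and ports are illustrative):

```
agent1 : tail("/var/log/app.log") | agentE2EChain("collectorA:35853", "collectorB:35853") ;
```

collectorB only receives events while collectorA is unreachable.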

autoE2EChain

No collector needs to be specified; the master coordinates where events flow.

autoDFOChain

As above, with disk-failover (DFO) semantics.

autoBEChain

As above, with best-effort (BE) semantics.
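With the auto chains, the agent side names no collectors at all; the master pairs agents with whatever collectors are registered. A sketch (node names and the HDFS path are illustrative):

```
agent1     : tail("/var/log/app.log") | autoE2EChain ;
collector1 : autoCollectorSource | collectorSink("hdfs://namenode/flume/", "log-", 30000) ;
```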

3. Flume's Logical Sinks

logicalSink("logicalnode")
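logicalSink sends events to the named logical node, and the master resolves the logical name to a physical host and port. A sketch (node names are illustrative):

```
agent1 : tail("/var/log/app.log") | logicalSink("proc1") ;
proc1  : logicalSource | console ;
```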

4. Flume's Basic Sinks

When events are not aggregated through a collector, a source can send directly to a basic sink.

null

Discards every event it receives.

console[("formatter")]

Forwards events to the console.

text("txtfile" [,"formatter"])

Forwards events to a text file.

seqfile("filename")

Forwards events to a Hadoop SequenceFile.

dfs("hdfspath")

Forwards events to HDFS.

customdfs("hdfspath"[, "format"])

Writes to HDFS using a custom output format.

escapedCustomDfs("hdfspath", "file", "format")

Like customdfs, but the path and file name may contain escape sequences that are expanded per event.

rpcSink("host"[, port])

Forwards events over Flume's RPC framework.

syslogTcp("host"[,port])

Sends events to the given network address as syslog over TCP.

irc("host", port, "nick", "chan")

Posts events to an IRC channel, connecting as nick and joining channel chan.
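For quick experiments, any source can be wired straight to a basic sink with no collector involved. A sketch (file paths are illustrative; note that text acts as both a source and a sink):

```
node1 : text("/tmp/in.txt") | console ;
node2 : tail("/var/log/app.log") | text("/tmp/out.txt") ;
```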

