  • Flume (4): Single Data Source, Multiple Outputs

    1) Case requirement

      Flume-1 monitors a file for changes and passes each change to Flume-2, which stores it on HDFS. At the same time, Flume-1 passes the same change to Flume-3, which writes it to the local file system.

    2) Requirement analysis
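
    In outline (agent names, ports, and paths as configured in the steps below), the data flows like this:

    hive.log --(exec source)--> Flume-1 (a1, replicating selector)
                                  |-- channel c1 --> avro sink :4141 --> Flume-2 (a2) --> HDFS
                                  `-- channel c2 --> avro sink :4142 --> Flume-3 (a3) --> /opt/module/datas/flume3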

     

    3) Implementation steps

    1. Preparation

    Create a group1 directory under /opt/module/flume-1.9.0/job:

    [ck@hadoop102 job]$ mkdir group1
    [ck@hadoop102 job]$ cd group1/

    Create a flume3 directory under /opt/module/datas/:

    [ck@hadoop102 datas]$ mkdir flume3

    2. Create flume-file-flume.conf

    Configure one source that reads the log file, plus two channels and two sinks, feeding flume-flume-hdfs and flume-flume-dir respectively.

    Create and open the configuration file:

    [ck@hadoop102 group1]$ touch flume-file-flume.conf
    [ck@hadoop102 group1]$ vim flume-file-flume.conf

    Add the following content:

    #Name the components on this agent
    
    a1.sources = r1
    a1.sinks = k1 k2
    a1.channels = c1 c2
    
    # Replicate the data flow to all channels
    a1.sources.r1.selector.type = replicating
    
    #Describe/configure the source
    a1.sources.r1.type = exec
    a1.sources.r1.command = tail -F /opt/module/hive/logs/hive.log
    a1.sources.r1.shell = /bin/bash -c
    
    #Describe the sinks
    # k1 -> Flume-2 (flume-flume-hdfs) on port 4141
    a1.sinks.k1.type = avro
    a1.sinks.k1.hostname = hadoop102
    a1.sinks.k1.port = 4141
    # k2 -> Flume-3 (flume-flume-dir) on port 4142
    a1.sinks.k2.type = avro
    a1.sinks.k2.hostname = hadoop102
    a1.sinks.k2.port = 4142
    
    #Describe the channel
    a1.channels.c1.type = memory
    a1.channels.c1.capacity = 1000
    a1.channels.c1.transactionCapacity = 100
    a1.channels.c2.type = memory
    a1.channels.c2.capacity = 1000
    a1.channels.c2.transactionCapacity = 100
    
    #Bind the Source and sink to the channel
    a1.sources.r1.channels = c1 c2
    a1.sinks.k1.channel = c1
    a1.sinks.k2.channel = c2

     Note: Avro is a language-neutral data serialization and RPC framework created by Hadoop founder Doug Cutting.

     Note: RPC (Remote Procedure Call) is a protocol for requesting a service from a program on a remote computer over a network, without needing to understand the underlying network technology.
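
    For reference, replicating is also the selector's default. The alternative, multiplexing, routes each event to a channel chosen by an event-header value. A minimal sketch (the header name "state" and its values are hypothetical):

    a1.sources.r1.selector.type = multiplexing
    a1.sources.r1.selector.header = state
    a1.sources.r1.selector.mapping.CZ = c1
    a1.sources.r1.selector.mapping.US = c2
    a1.sources.r1.selector.default = c1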

    3. Create flume-flume-hdfs.conf

    Configure a source that receives the upstream Flume's output and a sink that writes to HDFS.

    Create and open the configuration file:

    [ck@hadoop102 group1]$ touch flume-flume-hdfs.conf
    [ck@hadoop102 group1]$ vim flume-flume-hdfs.conf

    Add the following content:

    #Name the components on this agent
    a2.sources = r1
    a2.sinks = k1
    a2.channels = c1
    
    #Describe/configure the source
    a2.sources.r1.type = avro
    a2.sources.r1.bind = hadoop102
    a2.sources.r1.port = 4141
    
    #Describe the sink
    a2.sinks.k1.type = hdfs
    a2.sinks.k1.hdfs.path = hdfs://hadoop102:9000/flume-1.9.0/flume2/%Y%m%d/%H
    a2.sinks.k1.hdfs.filePrefix = flume2-
    # Round event timestamps down to the hour for directory bucketing
    a2.sinks.k1.hdfs.round = true
    a2.sinks.k1.hdfs.roundValue = 1
    a2.sinks.k1.hdfs.roundUnit = hour
    # Use local time (no timestamp header is set upstream)
    a2.sinks.k1.hdfs.useLocalTimeStamp = true
    a2.sinks.k1.hdfs.batchSize = 100
    # Write plain text instead of the default SequenceFile
    a2.sinks.k1.hdfs.fileType = DataStream
    # Roll a new file every 600 s or at ~128 MB (just under one HDFS block); 0 disables count-based rolling
    a2.sinks.k1.hdfs.rollInterval = 600
    a2.sinks.k1.hdfs.rollSize = 134217700
    a2.sinks.k1.hdfs.rollCount = 0
    # Keep block-replication events from triggering premature rolls
    a2.sinks.k1.hdfs.minBlockReplicas = 1
    
    #Use a channel which buffers events in memory
    a2.channels.c1.type = memory
    a2.channels.c1.capacity = 1000
    a2.channels.c1.transactionCapacity = 100
    
    #Bind the Source and sink to the channel
    a2.sources.r1.channels = c1
    a2.sinks.k1.channel = c1

    4. Create flume-flume-dir.conf

    Configure a source that receives the upstream Flume's output and a sink that writes to a local directory.

    Create and open the configuration file:

     [ck@hadoop102 group1]$ touch flume-flume-dir.conf
     [ck@hadoop102 group1]$ vim flume-flume-dir.conf

    Add the following content:

    #Name the components on this agent
    a3.sources = r1
    a3.sinks = k1
    a3.channels = c2
    
    #Describe/configure the source
    a3.sources.r1.type = avro
    a3.sources.r1.bind = hadoop102
    a3.sources.r1.port = 4142
    
    #Describe the sink
    a3.sinks.k1.type = file_roll
    a3.sinks.k1.sink.directory = /opt/module/datas/flume3
    
    #Use a channel which buffers events in memory
    a3.channels.c2.type = memory
    a3.channels.c2.capacity = 1000
    a3.channels.c2.transactionCapacity = 100
    
    #Bind the Source and sink to the channel
    a3.sources.r1.channels = c2
    a3.sinks.k1.channel = c2

    Note: the output directory must already exist on the local file system; the file_roll sink will not create it.
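
    If it was skipped in step 1, create the directory before starting a3, for example:

    [ck@hadoop102 datas]$ mkdir -p /opt/module/datas/flume3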

    5. Run the configuration files

    Start the agents in this order: flume-flume-dir, flume-flume-hdfs, then flume-file-flume, so that the downstream avro sources are listening before a1's avro sinks try to connect. Run each command in its own terminal (or background it).

    [ck@hadoop102 flume]$ bin/flume-ng agent --name a3 --conf conf/ --conf-file job/group1/flume-flume-dir.conf
    [ck@hadoop102 flume]$ bin/flume-ng agent --name a2 --conf conf/ --conf-file job/group1/flume-flume-hdfs.conf
    [ck@hadoop102 flume]$ bin/flume-ng agent --name a1 --conf conf/ --conf-file job/group1/flume-file-flume.conf
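
    Once a2 (or a3) is up, its avro source can also be smoke-tested with Flume's bundled avro client; the test file path here is hypothetical:

    [ck@hadoop102 flume]$ echo "hello flume" > /tmp/avro-test.txt
    [ck@hadoop102 flume]$ bin/flume-ng avro-client --conf conf/ --host hadoop102 --port 4141 --filename /tmp/avro-test.txt

    Each line of the file is sent as one event to the avro source on port 4141, so it should show up in the HDFS output.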

    6. Start Hadoop and Hive

    [ck@hadoop102 hadoop-2.7.2]$ sbin/start-dfs.sh
    [ck@hadoop103 hadoop-2.7.2]$ sbin/start-yarn.sh
    
    [ck@hadoop102 hive]$ bin/hive
    hive (default)>
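
    Any statement run in the Hive CLI appends to Hive's log (assumed here, matching the exec source above, to be /opt/module/hive/logs/hive.log), which gives the tail command something to read. For example:

    hive (default)> show databases;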

    7. Check the data on HDFS
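
    One way to check from the command line (path as configured in flume-flume-hdfs.conf):

    [ck@hadoop102 hadoop-2.7.2]$ bin/hdfs dfs -ls -R /flume-1.9.0/flume2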

     

    8. Check the data in the /opt/module/datas/flume3 directory

    [ck@hadoop102 flume3]$ ll
    total 8
    -rw-rw-r--. 1 ck ck 5942 May 22 00:09 1526918887550-3
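
    Note that file_roll starts a new output file every 30 seconds by default, even when no events arrive, so the directory can accumulate empty files. Raising the interval is one way to reduce that (600 here is just an illustrative value):

    a3.sinks.k1.sink.rollInterval = 600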