  • Offline E-commerce Data Warehouse (8): User Behavior Data Collection (8), Component Installation (4): Log Collection Flume

    0 Introduction

    Log collection with Flume

    1 Log Collection Flume Installation

    Cluster plan:

                                Server hadoop102    Server hadoop103    Server hadoop104
    Flume (log collection)      Flume               Flume               Flume

    2 Project Experience with Flume Components

    1) Source

    (1) Advantages of Taildir Source over Exec Source and Spooling Directory Source

    Taildir Source supports resuming from a checkpoint and tailing multiple directories. In Flume 1.6 and earlier you had to write a custom Source that recorded the last read position in each file in order to resume after a restart.

    Exec Source can collect data in real time, but data is lost if the Flume agent is not running or the shell command fails.

    Spooling Directory Source monitors a directory, but does not support resuming from a checkpoint.
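
    The checkpoint lives in the position file that Taildir Source maintains (log_position.json in the configuration below): a JSON array recording the inode, byte offset, and path of each tailed file. A sketch of its contents (the inode and offset values here are made up for illustration):

    [{"inode":2496272,"pos":12,"file":"/tmp/logs/app.2020-06-14.log"}]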

    (2) How should batchSize be set?

    Answer: when events are around 1 KB each, 500-1000 is appropriate (the default is 100).
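
    batchSize is set per source in the agent configuration; for example, using the agent and source names from the configuration in the next section:

    # events delivered to the channel per batch (default 100)
    a1.sources.r1.batchSize = 500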

    2) Channel

    Using a Kafka Channel writes events straight to Kafka, eliminating the Sink stage and improving efficiency.

    3 Log Collection Flume Configuration

    1) Flume configuration analysis

    Flume reads the log file data directly; the log files are named in the format app-yyyy-mm-dd.log.

    2) The concrete Flume configuration is as follows:

    (1) Create a file named file-flume-kafka.conf in the /opt/module/flume/conf directory

    [atguigu@hadoop102 conf]$ vim file-flume-kafka.conf

    Put the following content in the file:

    a1.sources=r1
    a1.channels=c1 c2
    
    # configure source
    a1.sources.r1.type = TAILDIR
    a1.sources.r1.positionFile = /opt/module/flume/test/log_position.json
    a1.sources.r1.filegroups = f1
    a1.sources.r1.filegroups.f1 = /tmp/logs/app.+
    a1.sources.r1.fileHeader = true
    a1.sources.r1.channels = c1 c2
    
    #interceptor
    a1.sources.r1.interceptors =  i1 i2
    a1.sources.r1.interceptors.i1.type = com.atguigu.flume.interceptor.LogETLInterceptor$Builder
    a1.sources.r1.interceptors.i2.type = com.atguigu.flume.interceptor.LogTypeInterceptor$Builder
    
    a1.sources.r1.selector.type = multiplexing
    a1.sources.r1.selector.header = topic
    a1.sources.r1.selector.mapping.topic_start = c1
    a1.sources.r1.selector.mapping.topic_event = c2
    
    # configure channel
    a1.channels.c1.type = org.apache.flume.channel.kafka.KafkaChannel
    a1.channels.c1.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
    a1.channels.c1.kafka.topic = topic_start
    a1.channels.c1.parseAsFlumeEvent = false
    a1.channels.c1.kafka.consumer.group.id = flume-consumer
    
    a1.channels.c2.type = org.apache.flume.channel.kafka.KafkaChannel
    a1.channels.c2.kafka.bootstrap.servers = hadoop102:9092,hadoop103:9092,hadoop104:9092
    a1.channels.c2.kafka.topic = topic_event
    a1.channels.c2.parseAsFlumeEvent = false
    a1.channels.c2.kafka.consumer.group.id = flume-consumer

    Note: com.atguigu.flume.interceptor.LogETLInterceptor and com.atguigu.flume.interceptor.LogTypeInterceptor are the fully qualified class names of the custom interceptors. Change them to match your own interceptor classes.

    (Figure: Flume data collection flow)

    4 Flume ETL and Log-Type Interceptors

    This project defines two custom interceptors: an ETL interceptor and a log-type interceptor.

    The ETL interceptor filters out logs whose timestamp is invalid or whose JSON data is incomplete.

    The log-type interceptor separates startup logs from event logs so that they can be sent to different Kafka topics.

    1) Create a Maven project named flume-interceptor

    2) Create the package com.atguigu.flume.interceptor

    3) Add the following to the pom.xml file

    <dependencies>
        <dependency>
            <groupId>org.apache.flume</groupId>
            <artifactId>flume-ng-core</artifactId>
            <version>1.7.0</version>
        </dependency>
    </dependencies>
    
    <build>
        <plugins>
            <plugin>
                <artifactId>maven-compiler-plugin</artifactId>
                <version>2.3.2</version>
                <configuration>
                    <source>1.8</source>
                    <target>1.8</target>
                </configuration>
            </plugin>
            <plugin>
                <artifactId>maven-assembly-plugin</artifactId>
                <configuration>
                    <descriptorRefs>
                        <descriptorRef>jar-with-dependencies</descriptorRef>
                    </descriptorRefs>
                </configuration>
                <executions>
                    <execution>
                        <id>make-assembly</id>
                        <phase>package</phase>
                        <goals>
                            <goal>single</goal>
                        </goals>
                    </execution>
                </executions>
            </plugin>
        </plugins>
    </build>

    4) Create the LogETLInterceptor class in the com.atguigu.flume.interceptor package

    The Flume ETL interceptor LogETLInterceptor:

    package com.atguigu.flume.interceptor;
    
    import org.apache.flume.Context;
    import org.apache.flume.Event;
    import org.apache.flume.interceptor.Interceptor;
    
    import java.nio.charset.Charset;
    import java.util.ArrayList;
    import java.util.List;
    
    public class LogETLInterceptor implements Interceptor {
    
        @Override
        public void initialize() {
    
        }
    
        @Override
        public Event intercept(Event event) {
    
            // 1. Get the event body as a UTF-8 string
            byte[] body = event.getBody();
            String log = new String(body, Charset.forName("UTF-8"));
    
            // 2. Validate the log according to its type (startup log vs. event log)
            if (log.contains("start")) {
                if (LogUtils.validateStart(log)){
                    return event;
                }
            }else {
                if (LogUtils.validateEvent(log)){
                    return event;
                }
            }
    
            // 3. Return null so that events failing validation are dropped
            return null;
        }
    
        @Override
        public List<Event> intercept(List<Event> events) {
    
            ArrayList<Event> interceptors = new ArrayList<>();
    
            for (Event event : events) {
                Event intercept1 = intercept(event);
    
                if (intercept1 != null){
                    interceptors.add(intercept1);
                }
            }
    
            return interceptors;
        }
    
        @Override
        public void close() {
    
        }
    
        public static class Builder implements Interceptor.Builder{
    
            @Override
            public Interceptor build() {
                return new LogETLInterceptor();
            }
    
            @Override
            public void configure(Context context) {
    
            }
        }
    }

    5) The Flume log validation utility class LogUtils

    package com.atguigu.flume.interceptor;
    import org.apache.commons.lang.math.NumberUtils;
    
    public class LogUtils {
    
        public static boolean validateEvent(String log) {
            // server timestamp | json
            // 1549696569054 | {"cm":{"ln":"-89.2","sv":"V2.0.4","os":"8.2.0","g":"M67B4QYU@gmail.com","nw":"4G","l":"en","vc":"18","hw":"1080*1920","ar":"MX","uid":"u8678","t":"1549679122062","la":"-27.4","md":"sumsung-12","vn":"1.1.3","ba":"Sumsung","sr":"Y"},"ap":"weather","et":[]}
    
            // 1. Split on the '|' separator ("\\|" because split() takes a regex)
            String[] logContents = log.split("\\|");
    
            // 2. Check that the line has exactly two parts
            if(logContents.length != 2){
                return false;
            }
    
            // 3. Check the server timestamp: 13 digits (epoch milliseconds)
            if (logContents[0].length()!=13 || !NumberUtils.isDigits(logContents[0])){
                return false;
            }
    
            // 4. Rough JSON check: the payload must start with '{' and end with '}'
            if (!logContents[1].trim().startsWith("{") || !logContents[1].trim().endsWith("}")){
                return false;
            }
    
            return true;
        }
    
        public static boolean validateStart(String log) {
     // {"action":"1","ar":"MX","ba":"HTC","detail":"542","en":"start","entry":"2","extend1":"","g":"S3HQ7LKM@gmail.com","hw":"640*960","l":"en","la":"-43.4","ln":"-98.3","loading_time":"10","md":"HTC-5","mid":"993","nw":"WIFI","open_ad_type":"1","os":"8.2.1","sr":"D","sv":"V2.9.0","t":"1559551922019","uid":"993","vc":"0","vn":"1.1.5"}
    
            if (log == null){
                return false;
            }
    
            // Rough JSON check
            if (!log.trim().startsWith("{") || !log.trim().endsWith("}")){
                return false;
            }
    
            return true;
        }
    }
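
    As a quick sanity check (not part of the original project), a throwaway main method can exercise the validators with lines shaped like the samples in the comments above:

    package com.atguigu.flume.interceptor;
    
    public class LogUtilsTest {
        public static void main(String[] args) {
            // well-formed event log: 13-digit timestamp, '|', then JSON
            String goodEvent = "1549696569054|{\"ap\":\"weather\",\"et\":[]}";
            // malformed: the timestamp is not 13 digits
            String badEvent = "12345|{\"ap\":\"weather\"}";
    
            System.out.println(LogUtils.validateEvent(goodEvent)); // true
            System.out.println(LogUtils.validateEvent(badEvent));  // false
            System.out.println(LogUtils.validateStart("{\"en\":\"start\"}")); // true
        }
    }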

    6) The Flume log-type interceptor LogTypeInterceptor

    package com.atguigu.flume.interceptor;
    
    import org.apache.flume.Context;
    import org.apache.flume.Event;
    import org.apache.flume.interceptor.Interceptor;
    
    import java.nio.charset.Charset;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Map;
    
    public class LogTypeInterceptor implements Interceptor {
        @Override
        public void initialize() {
    
        }
    
        @Override
        public Event intercept(Event event) {
    
            // Distinguish the log type by inspecting the body and tagging the header
            // 1. Get the event body as a UTF-8 string
            byte[] body = event.getBody();
            String log = new String(body, Charset.forName("UTF-8"));
    
            // 2. Get the event headers
            Map<String, String> headers = event.getHeaders();
    
            // 3. Decide the log type and set the topic header accordingly
            if (log.contains("start")) {
                headers.put("topic","topic_start");
            }else {
                headers.put("topic","topic_event");
            }
    
            return event;
        }
    
        @Override
        public List<Event> intercept(List<Event> events) {
    
            ArrayList<Event> interceptors = new ArrayList<>();
    
            for (Event event : events) {
                Event intercept1 = intercept(event);
    
                interceptors.add(intercept1);
            }
    
            return interceptors;
        }
    
        @Override
        public void close() {
    
        }
    
        public static class Builder implements  Interceptor.Builder{
    
            @Override
            public Interceptor build() {
                return new LogTypeInterceptor();
            }
    
            @Override
            public void configure(Context context) {
    
            }
        }
    }
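
    To see how the two interceptors cooperate (a sketch, not part of the original project; EventBuilder is available via the flume-ng-core dependency), an event can be run through them in the same order as in the agent configuration:

    package com.atguigu.flume.interceptor;
    
    import org.apache.flume.Event;
    import org.apache.flume.event.EventBuilder;
    
    import java.nio.charset.StandardCharsets;
    
    public class InterceptorChainDemo {
        public static void main(String[] args) {
            Event event = EventBuilder.withBody(
                    "{\"en\":\"start\",\"t\":\"1559551922019\"}", StandardCharsets.UTF_8);
    
            // the ETL interceptor runs first and returns null for invalid events
            Event etled = new LogETLInterceptor.Builder().build().intercept(event);
    
            // the type interceptor then tags the header read by the multiplexing selector
            Event typed = new LogTypeInterceptor.Builder().build().intercept(etled);
    
            System.out.println(typed.getHeaders().get("topic")); // topic_start
        }
    }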

    7) Packaging

    After packaging the interceptor, only the standalone jar needs to be uploaded, not its dependency jars. After packaging, the jar must be placed under Flume's lib directory.

    Note: why are the dependency jars not needed? Because they already exist under Flume's lib directory.
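
    For reference, the jar is built with the standard Maven command, run from the project root (the exact project path depends on where you created it):

    mvn clean package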

    8) First place the built jar into the /opt/module/flume/lib directory on hadoop102.

    [atguigu@hadoop102 lib]$ ls | grep interceptor
    flume-interceptor-1.0-SNAPSHOT.jar

    9) Distribute Flume to hadoop103 and hadoop104

    [atguigu@hadoop102 module]$ xsync flume/
    
    [atguigu@hadoop102 flume]$ bin/flume-ng agent --name a1 --conf-file conf/file-flume-kafka.conf &

    5 Log Collection Flume Start/Stop Script

    1) Create a script f1.sh in the /home/atguigu/bin directory

    [atguigu@hadoop102 bin]$ vim f1.sh

    Put the following content in the script:

    #!/bin/bash
    # pass start to start the agents, stop to stop them
    if (( $# != 1 ))
    then
            echo "Please enter start or stop!"
            exit;
    fi
    # cmd holds the command to be executed on each node
    cmd=""
    if [ $1 = start ]
    then
            cmd="source /etc/profile;nohup flume-ng agent -c $FLUME_HOME/conf/ -n a1 -f $FLUME_HOME/myagents/f1.conf -Dflume.root.logger=DEBUG,console > /home/atguigu/f1.log 2>&1 &"
    elif [ $1 = stop ]
    then
            cmd="ps -ef | grep f1.conf | grep -v grep | awk '{print \$2}' | xargs kill -9"
    else
            echo "Please enter start or stop!"
    fi
    
    # start/stop collection on hadoop102 and hadoop103
    for i in hadoop102 hadoop103
    do
            ssh $i $cmd
    done

    Note that this script points the agent at $FLUME_HOME/myagents/f1.conf; if you use the file created in section 3, change -f to $FLUME_HOME/conf/file-flume-kafka.conf (and the grep in the stop branch to match).

    Note 1: nohup keeps the process running after you log out or close the terminal; the name means "no hang up", i.e. run the command without it being interrupted by a hangup.

    Note 2: /dev/null is the Linux null device file; everything written to it is discarded, so it is nicknamed the "black hole".

    Standard input (0): input from the keyboard, /proc/self/fd/0

    Standard output (1): output to the screen (the console), /proc/self/fd/1

    Standard error (2): output to the screen (the console), /proc/self/fd/2
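
    Putting these pieces together, the redirection used in the start command above reads like this (a generic illustration; some_command is a placeholder):

    # run the command immune to hangups, send stdout (fd 1) to the log file,
    # point stderr (fd 2) at the same place as stdout, and run it in the background
    nohup some_command > /home/atguigu/f1.log 2>&1 &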

    2) Grant the script execute permission

    [atguigu@hadoop102 bin]$ chmod 777 f1.sh

    3) Start the collection cluster with the script

    [atguigu@hadoop102 module]$ f1.sh start

    4) Stop the collection cluster with the script

    [atguigu@hadoop102 module]$ f1.sh stop
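
    To confirm an agent is actually running on a node (an optional check, mirroring the grep used by the stop branch of the script), look for the agent process:

    [atguigu@hadoop102 ~]$ ps -ef | grep f1.conf | grep -v grep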
  • Original article: https://www.cnblogs.com/qiu-hua/p/13504492.html