  • Using Kafka as a producer to send data to HBase

    Configuration file:

    agent.sources = r1
    agent.sinks = k1
    agent.channels = c1

    ## sources config
    agent.sources.r1.type = org.apache.flume.source.kafka.KafkaSource
    agent.sources.r1.kafka.bootstrap.servers = 192.168.80.128:9092,192.168.80.129:9092,192.168.80.130:9092
    agent.sources.r1.kafka.topics = 1713
    agent.sources.r1.migrateZookeeperOffsets = false
    agent.sources.r1.kafka.consumer.timeout.ms = 1000
    #agent.sources.r1.kafka.consumer.group.id = consumer-group

    ## channels config
    agent.channels.c1.type = memory
    agent.channels.c1.capacity = 1000
    agent.channels.c1.transactionCapacity = 100
    agent.channels.c1.byteCapacityBufferPercentage = 60
    agent.channels.c1.byteCapacity = 1280
    agent.channels.c1.keep-alive = 60


    # Describe the sink: the AsyncHBase sink writes each event into HBase
    agent.sinks.k1.type = asynchbase
    # target HBase table
    agent.sinks.k1.table = tb_words3
    # column family
    agent.sinks.k1.columnFamily = words
    # column that receives the event body
    agent.sinks.k1.serializer.payloadColumn = wd
    agent.sinks.k1.serializer = org.apache.flume.sink.hbase.SimpleAsyncHbaseEventSerializer

    # Bind the source and sink to the channel
    agent.sources.r1.channels = c1
    agent.sinks.k1.channel = c1
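
    With this configuration, the Flume agent consumes messages from the Kafka
    topic 1713 and writes each event body into column wd of column family
    words in the HBase table tb_words3. The table and column family must
    already exist (e.g. create 'tb_words3','words' in the hbase shell), and
    the agent is started with flume-ng agent --conf conf --conf-file
    <config file> --name agent, where --name must match the "agent" prefix
    used in the properties above.

    To feed the pipeline, any Kafka producer that publishes to topic 1713
    will do. Below is a minimal Java producer sketch, assuming the broker
    addresses and topic name from the configuration above; it is an
    illustration, not part of the original setup.

    import java.util.Properties;
    import org.apache.kafka.clients.producer.KafkaProducer;
    import org.apache.kafka.clients.producer.ProducerRecord;

    public class WordsProducer {
        public static void main(String[] args) {
            // broker list taken from the Kafka source configuration above
            Properties props = new Properties();
            props.put("bootstrap.servers",
                      "192.168.80.128:9092,192.168.80.129:9092,192.168.80.130:9092");
            props.put("key.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");
            props.put("value.serializer",
                      "org.apache.kafka.common.serialization.StringSerializer");

            try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
                // each record value becomes one cell in tb_words3 -> words:wd
                for (String word : new String[] {"hello", "flume", "hbase"}) {
                    producer.send(new ProducerRecord<>("1713", word));
                }
                producer.flush();
            }
        }
    }

    Each message sent this way should appear as a new row in tb_words3
    (SimpleAsyncHbaseEventSerializer generates its own row keys), which can
    be checked with scan 'tb_words3' in the hbase shell.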

  • Original article: https://www.cnblogs.com/pingzizhuanshu/p/9102587.html