  • Kafka - Consuming Kafka from Spark

    With the newer Kafka consumer API (the spark-streaming-kafka-0-10 integration):

    // Requires the spark-streaming-kafka-0-10 integration on the classpath
    import org.apache.kafka.clients.consumer.ConsumerRecord
    import org.apache.spark.streaming.dstream.InputDStream
    import org.apache.spark.streaming.kafka010.KafkaUtils
    import org.apache.spark.streaming.kafka010.LocationStrategies.PreferConsistent
    import org.apache.spark.streaming.kafka010.ConsumerStrategies.Subscribe

    // Broker list and topic are read from an external properties file
    val brokers = properties.getProperty("kafka.host.list")
    val topics  = Set(properties.getProperty("kafka.application.topic"))
    val kafkaParams = Map[String, String](
      "bootstrap.servers"       -> brokers,
      "group.id"                -> "ntaflowgroup",
      "auto.commit.interval.ms" -> "1000",
      "key.deserializer"        -> "org.apache.kafka.common.serialization.StringDeserializer",
      "value.deserializer"      -> "org.apache.kafka.common.serialization.StringDeserializer",
      "auto.offset.reset"       -> "latest",
      "enable.auto.commit"      -> "true"
    )
    // Direct stream: each record carries its topic, partition, offset, key, and value
    val ntaflowCache: InputDStream[ConsumerRecord[String, String]] =
      KafkaUtils.createDirectStream[String, String](
        ssc,
        PreferConsistent,
        Subscribe[String, String](topics, kafkaParams)
      )
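    The consumer configuration above is assembled from a properties file before being handed to Spark. A minimal, self-contained sketch of that wiring, runnable without a Spark or Kafka installation; the broker and topic values are hypothetical, and the property keys mirror the snippet above:

```scala
import java.util.Properties

object KafkaParamsSketch {
  // Build the consumer configuration map from a loaded Properties object;
  // the property keys ("kafka.host.list", "kafka.application.topic") follow
  // the snippet above.
  def consumerParams(properties: Properties): Map[String, String] = Map(
    "bootstrap.servers"       -> properties.getProperty("kafka.host.list"),
    "group.id"                -> "ntaflowgroup",
    "auto.commit.interval.ms" -> "1000",
    "key.deserializer"        -> "org.apache.kafka.common.serialization.StringDeserializer",
    "value.deserializer"      -> "org.apache.kafka.common.serialization.StringDeserializer",
    "auto.offset.reset"       -> "latest",
    "enable.auto.commit"      -> "true"
  )

  def main(args: Array[String]): Unit = {
    val props = new Properties()
    // Hypothetical values standing in for the real properties file
    props.setProperty("kafka.host.list", "broker1:9092,broker2:9092")
    props.setProperty("kafka.application.topic", "ntaflow")

    val params = consumerParams(props)
    println(params("bootstrap.servers"))
    println(Set(props.getProperty("kafka.application.topic")).head)
  }
}
```

    Note that with `enable.auto.commit` set to `true`, offsets are committed automatically every `auto.commit.interval.ms`, so records polled but not yet fully processed can be lost on failure; for stronger delivery guarantees, offsets are usually committed manually after processing.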
  • Original post: https://www.cnblogs.com/RzCong/p/8630043.html