  • Getting a Spring Boot project to connect to Kafka on a remote server: the pitfalls, and a complete example

    Versions

    Spring Boot 2.1.5.RELEASE
    Kafka 2.2

    Pitfalls encountered

    1. If you run a recent Spring Boot, use a correspondingly recent Kafka version! (What actually matters is that the kafka-clients version pulled in by spring-kafka is compatible with your broker.)
    2. After starting ZooKeeper on the cloud server and then Kafka, the broker log showed no errors, though the EndPoint log lines looked a little odd. But when the Spring Boot project tried to connect, it kept printing a WARN-level log: "Connection to node -1 could not be established. Broker may not be available." In other words, the client never reached Kafka.
    3. The Spring Boot console also threw an exception complaining that the IP address was illegal.

    Telnet to port 9092 on the cloud server got no response, even though the port had been opened in the server's security group and netstat showed 9092 being listened on. What was going on?

    It turned out to be the Kafka configuration file: port 9092 was not being listened on correctly (it was effectively bound to localhost only), and the IP-address exception meant the broker had to advertise the Kafka server's actual IP address. The checks I ran looked roughly like the sketch below.
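
    For reference, a sketch of those checks (the IP is a placeholder; netstat flags may vary by distribution):

    # From the development machine: is anything answering on 9092?
    telnet 47.XX.XX.XX 9092

    # On the Kafka server: which address is 9092 actually bound to?
    netstat -tlnp | grep 9092
    # Seeing 127.0.0.1:9092 here (rather than 0.0.0.0:9092 or :::9092) means the
    # broker is listening on loopback only, and remote clients can never connect.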

    Note: the three settings below, listeners, advertised.host.name and host.name, are the critical ones; they solved all of my problems!

    advertised.host.name must be set to the Kafka server's IP address! If it is left as localhost and your application does not run on the same machine as Kafka, the client will not be able to connect.

    Modify the Kafka broker's configuration file as follows:

    ############################# Server Basics #############################
    
    # The id of the broker. This must be set to a unique integer for each broker.
    broker.id=0
    
    ############################# Socket Server Settings #############################
    
    # The socket the broker listens on
    listeners=PLAINTEXT://:9092
    # The address advertised to clients; this MUST be the server's public IP address!
    advertised.host.name=47.XX.XX.XX
    host.name=localhost
    
    # Maps listener names to security protocols, the default is for them to be the same. See the config documentation for more details
    #listener.security.protocol.map=PLAINTEXT:PLAINTEXT,SSL:SSL,SASL_PLAINTEXT:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    
    # The number of threads that the server uses for receiving requests from the network and sending responses to the network
    num.network.threads=3
    
    # The number of threads that the server uses for processing requests, which may include disk I/O
    num.io.threads=8
    
    # The send buffer (SO_SNDBUF) used by the socket server
    socket.send.buffer.bytes=102400
    
    # The receive buffer (SO_RCVBUF) used by the socket server
    socket.receive.buffer.bytes=102400
    
    # The maximum size of a request that the socket server will accept (protection against OOM)
    socket.request.max.bytes=104857600
    
    
    ############################# Log Basics #############################
    
    # A comma separated list of directories under which to store log files
    log.dirs=/root/mysoftware/kafka_2.12-2.2.0/logs
    
    # The default number of log partitions per topic. More partitions allow greater
    # parallelism for consumption, but this will also result in more files across
    # the brokers.
    num.partitions=1
    
    # The number of threads per data directory to be used for log recovery at startup and flushing at shutdown.
    # This value is recommended to be increased for installations with data dirs located in RAID array.
    num.recovery.threads.per.data.dir=1
    
    ############################# Internal Topic Settings  #############################
    # The replication factor for the group metadata internal topics "__consumer_offsets" and "__transaction_state"
    # For anything other than development testing, a value greater than 1 (such as 3) is recommended to ensure availability.
    offsets.topic.replication.factor=1
    transaction.state.log.replication.factor=1
    transaction.state.log.min.isr=1
    
    ############################# Log Flush Policy #############################
    
    # Messages are immediately written to the filesystem but by default we only fsync() to sync
    # the OS cache lazily. The following configurations control the flush of data to disk.
    # There are a few important trade-offs here:
    #    1. Durability: Unflushed data may be lost if you are not using replication.
    #    2. Latency: Very large flush intervals may lead to latency spikes when the flush does occur as there will be a lot of data to flush.
    #    3. Throughput: The flush is generally the most expensive operation, and a small flush interval may lead to excessive seeks.
    # The settings below allow one to configure the flush policy to flush data after a period of time or
    # every N messages (or both). This can be done globally and overridden on a per-topic basis.
    
    # The number of messages to accept before forcing a flush of data to disk
    #log.flush.interval.messages=10000
    
    # The maximum amount of time a message can sit in a log before we force a flush
    #log.flush.interval.ms=1000
    
    ############################# Log Retention Policy #############################
    
    # The following configurations control the disposal of log segments. The policy can
    # be set to delete segments after a period of time, or after a given size has accumulated.
    # A segment will be deleted whenever *either* of these criteria are met. Deletion always happens
    # from the end of the log.
    
    # The minimum age of a log file to be eligible for deletion due to age
    log.retention.hours=168
    
    # A size-based retention policy for logs. Segments are pruned from the log unless the remaining
    # segments drop below log.retention.bytes. Functions independently of log.retention.hours.
    #log.retention.bytes=1073741824
    
    # The maximum size of a log segment file. When this size is reached a new log segment will be created.
    log.segment.bytes=1073741824
    
    # The interval at which log segments are checked to see if they can be deleted according
    # to the retention policies
    log.retention.check.interval.ms=300000
    
    ############################# Zookeeper #############################
    
    # Zookeeper connection string (see zookeeper docs for details).
    # This is a comma separated host:port pairs, each corresponding to a zk
    # server. e.g. "127.0.0.1:3000,127.0.0.1:3001,127.0.0.1:3002".
    # You can also append an optional chroot string to the urls to specify the
    # root directory for all kafka znodes.
    zookeeper.connect=localhost:2181
    
    # Timeout in ms for connecting to zookeeper
    zookeeper.connection.timeout.ms=6000
    
    
    ############################# Group Coordinator Settings #############################
    
    # The following configuration specifies the time, in milliseconds, that the GroupCoordinator will delay the initial consumer rebalance.
    # The rebalance will be further delayed by the value of group.initial.rebalance.delay.ms as new members join the group, up to a maximum of max.poll.interval.ms.
    # The default value for this is 3 seconds.
    # We override this to 0 here as it makes for a better out-of-the-box experience for development and testing.
    # However, in production environments the default value of 3 seconds is more suitable as this will help to avoid unnecessary, and potentially expensive, rebalances during application startup.
    group.initial.rebalance.delay.ms=0
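
    A note on those settings: host.name and advertised.host.name are legacy properties, deprecated since Kafka 0.10 in favor of listener-based configuration. On Kafka 2.2 an equivalent setup would look roughly like this (same placeholder IP):

    # Bind the PLAINTEXT listener on all interfaces
    listeners=PLAINTEXT://0.0.0.0:9092
    # The address returned to clients in metadata responses; must be reachable from the client side
    advertised.listeners=PLAINTEXT://47.XX.XX.XX:9092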

    Code

    pom.xml

    <?xml version="1.0" encoding="UTF-8"?>
    <project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
        xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
        <modelVersion>4.0.0</modelVersion>
        <parent>
            <groupId>org.springframework.boot</groupId>
            <artifactId>spring-boot-starter-parent</artifactId>
            <version>2.1.5.RELEASE</version>
            <relativePath/> <!-- lookup parent from repository -->
        </parent>
        <groupId>xy.study</groupId>
        <artifactId>kafka-demo</artifactId>
        <version>0.0.1-SNAPSHOT</version>
        <name>kafka-demo</name>
        <description>Kafka demo project for Spring Boot</description>
    
        <properties>
            <java.version>1.8</java.version>
        </properties>
    
        <dependencies>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter</artifactId>
            </dependency>
            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka</artifactId>
            </dependency>
    
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-devtools</artifactId>
                <scope>runtime</scope>
            </dependency>
            <dependency>
                <groupId>com.alibaba</groupId>
                <artifactId>fastjson</artifactId>
                <version>1.2.47</version>
            </dependency>
    
            <dependency>
                <groupId>org.projectlombok</groupId>
                <artifactId>lombok</artifactId>
                <optional>true</optional>
            </dependency>
            <dependency>
                <groupId>org.springframework.boot</groupId>
                <artifactId>spring-boot-starter-test</artifactId>
                <scope>test</scope>
            </dependency>
            <dependency>
                <groupId>org.springframework.kafka</groupId>
                <artifactId>spring-kafka-test</artifactId>
                <scope>test</scope>
            </dependency>
        </dependencies>
    
        <build>
            <plugins>
                <plugin>
                    <groupId>org.springframework.boot</groupId>
                    <artifactId>spring-boot-maven-plugin</artifactId>
                </plugin>
            </plugins>
        </build>
    
    </project>

    application.properties

    #============== kafka ===================
    # Kafka broker address(es); several can be listed, comma-separated
    spring.kafka.bootstrap-servers=47.XX.XX.XX:9092
    
    #=============== provider  =======================
    
    spring.kafka.producer.retries=0
    # Upper bound, in bytes, of a producer batch (batch.size is a byte limit, not a message count)
    spring.kafka.producer.batch-size=16384
    # Total memory, in bytes, the producer may use to buffer records waiting to be sent
    spring.kafka.producer.buffer-memory=33554432
    
    # Serializers for message keys and values
    spring.kafka.producer.key-serializer=org.apache.kafka.common.serialization.StringSerializer
    spring.kafka.producer.value-serializer=org.apache.kafka.common.serialization.StringSerializer
    
    #=============== consumer  =======================
    # Default consumer group id
    spring.kafka.consumer.group-id=consumer-group-test
    
    spring.kafka.consumer.auto-offset-reset=earliest
    spring.kafka.consumer.enable-auto-commit=true
    spring.kafka.consumer.auto-commit-interval=100
    
    # Deserializers for message keys and values
    spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.StringDeserializer
    spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.StringDeserializer

    Producer and consumer

    import com.alibaba.fastjson.JSONObject;
    import lombok.extern.slf4j.Slf4j;
    import org.springframework.beans.factory.annotation.Autowired;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.support.SendResult;
    import org.springframework.stereotype.Component;
    import org.springframework.util.concurrent.ListenableFuture;
    import org.springframework.util.concurrent.ListenableFutureCallback;

    @Component
    @Slf4j
    public class KafkaProducer {
    
        @Autowired
        private KafkaTemplate<String, String> kafkaTemplate;
    
    
        public void sendADotaHero() {
            DotaHero dotaHero = new DotaHero("虚空假面", "敏捷", "男");
    
            ListenableFuture<SendResult<String, String>> future = kafkaTemplate.send(KafkaTopic.A_DOTA_HERO, JSONObject.toJSONString(dotaHero));
    
            future.addCallback(new ListenableFutureCallback<SendResult<String, String>>() {
                @Override
                public void onFailure(Throwable throwable) {
                    log.error("kafka sendMessage error, throwable = {}, topic = {}, data = {}", throwable, KafkaTopic.A_DOTA_HERO, dotaHero);
                }
    
                @Override
                public void onSuccess(SendResult<String, String> stringDotaHeroSendResult) {
                    log.info("kafka sendMessage success topic = {}, data = {}",KafkaTopic.A_DOTA_HERO, dotaHero);
                }
            });
    
            log.info("kafka sendMessage end");
    
        }
    
    }

    import lombok.extern.slf4j.Slf4j;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.springframework.kafka.annotation.KafkaListener;
    import org.springframework.stereotype.Component;

    @Slf4j
    @Component
    public class KafkaConsumer {
    
        // The configured value-deserializer is StringDeserializer, so the payload
        // arrives as the JSON string produced by the sender, not as a DotaHero.
        @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
        public void kafkaConsumer(ConsumerRecord<String, String> consumerRecord) {
    
            log.info("kafkaConsumer: topic = {}, msg = {}", consumerRecord.topic(), consumerRecord.value());
    
        }
    }
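
    If you want the DotaHero object back rather than raw JSON, one option is to parse the payload inside the listener. A sketch using the fastjson dependency already on the classpath (kafkaConsumerAsObject is a hypothetical replacement for the listener above, not an addition; two listeners in the same group would compete for the topic's single partition):

    import com.alibaba.fastjson.JSON;

    @KafkaListener(topics = KafkaTopic.A_DOTA_HERO, groupId = "${spring.kafka.consumer.group-id}")
    public void kafkaConsumerAsObject(ConsumerRecord<String, String> record) {
        // record.value() is the JSON produced by JSONObject.toJSONString(dotaHero)
        DotaHero hero = JSON.parseObject(record.value(), DotaHero.class);
        log.info("kafkaConsumer: topic = {}, hero = {}", record.topic(), hero);
    }

    The message entity and the topic constant: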
    @Data
    @AllArgsConstructor
    @NoArgsConstructor
    public class DotaHero {
    
        private String name;
        private String kind;
        private String sex;
    
        /**
         * Returns a list of distinct DotaHero elements.
         * @return a new list containing three different heroes
         */
        public static List<DotaHero> bulidDiffObjectList(){
            List<DotaHero> list = new ArrayList<>();
            list.add(new DotaHero("影魔", "敏捷", "男"));
            list.add(new DotaHero("小黑", "敏捷", "女"));
            list.add(new DotaHero("马尔斯", "力量", "男"));
    
            return list;
        }
    }

    public class KafkaTopic {
        public static final String A_DOTA_HERO = "a_dota_hero";
    
    
        private KafkaTopic() {
        }
    }
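
    Note that the topic a_dota_hero is never created explicitly; the example works only because Kafka brokers auto-create topics by default (auto.create.topics.enable=true). If auto-creation is disabled on your broker, a minimal sketch of declaring the topic through Spring (a NewTopic bean is picked up by Spring Boot's auto-configured KafkaAdmin; the class name KafkaTopicConfig is my own):

    import org.apache.kafka.clients.admin.NewTopic;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;

    @Configuration
    public class KafkaTopicConfig {

        // 1 partition, replication factor 1: matches the single-broker setup above
        @Bean
        public NewTopic aDotaHero() {
            return new NewTopic(KafkaTopic.A_DOTA_HERO, 1, (short) 1);
        }
    }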

    Testing

    With the Spring Boot application running, run the test to fire the producer:

    @Slf4j
    @RunWith(SpringRunner.class)
    @SpringBootTest
    public class KafkaDemoApplicationTests {
    
        @Autowired
        private KafkaProducer kafkaProducer;
    
        private Clock clock = Clock.systemDefaultZone();
        private long begin;
        private long end;
    
        @Before
        public void init() {
            begin = clock.millis();
        }
    
        @Test
        public void send(){
            kafkaProducer.sendADotaHero();
        }
    
        @After
        public void end() {
            end = clock.millis();
            log.info("Spent {} millis.", end - begin);
        }
    
    }
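
    One caveat when producing from a short-lived JUnit test: KafkaTemplate.send() is asynchronous, so the test JVM can exit before the record ever leaves the producer's buffer, and the callback may never run. A sketch of a deterministic variant, assuming it sits inside the test class above with a KafkaTemplate<String, String> autowired (sendAndWait is my own name):

    import java.util.concurrent.TimeUnit;

    import org.springframework.kafka.support.SendResult;

    @Test
    public void sendAndWait() throws Exception {
        // Block until the broker acknowledges the record (or fail after 10s),
        // so the JVM cannot shut down with the message still buffered.
        SendResult<String, String> result = kafkaTemplate
                .send(KafkaTopic.A_DOTA_HERO, "{\"name\":\"test\"}")
                .get(10, TimeUnit.SECONDS);
        log.info("acked: partition = {}, offset = {}",
                result.getRecordMetadata().partition(),
                result.getRecordMetadata().offset());
    }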