
Debezium error when connecting to multi-broker Kafka in Docker Swarm
Asked by a Stack Overflow user on 2018-08-03 18:48:25
1 answer · 731 views · 0 followers · 0 votes

I set up my Swarm with this stack: Kafka (multi-broker), Zookeeper, and Debezium. Kafka and Zookeeper are working; I can create topics, consumers, and producers. But Debezium fails with the error org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties. I have not modified anything beyond the default configuration in my docker stack file:

version: '3.6'
services:
   zoo:
      image: wurstmeister/zookeeper
      ports:
         - '2181:2181'
      volumes:
         - zoo-data:/tmp/zookeeper
      deploy:
         replicas: 1
         placement:
            constraints:
               - node.labels.type==zoo
   kafka:
      image: wurstmeister/kafka:latest
      ports:
         - target: 9094
           published: 9094
           protocol: tcp
           mode: host
      environment:
         HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2"
         KAFKA_ZOOKEEPER_CONNECT: zoo:2181
         KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
         KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
         KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
         KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
         #KAFKA_CREATE_TOPICS: "Topic1:1:2,Topic2:1:1:compact"
      volumes:
         - /var/run/docker.sock:/var/run/docker.sock
         - kafka-data:/tmp/kafka-logs
      deploy:
         mode: global
         placement:
            constraints:
               - node.labels.name==kafka
      depends_on:
         - zoo

   debezium:
      image: debezium/connect:0.8
      hostname: connect
      ports:
         - '8083:8083'
      environment:
         BOOTSTRAP_SERVERS: kafka:9094
         GROUP_ID: 1
         CONFIG_STORAGE_TOPIC: my_connect_configs
         OFFSET_STORAGE_TOPIC: my_connect_offsets
      deploy:
         placement:
            constraints:
               - node.labels.type==dbz
      depends_on:
         - kafka
volumes:
   kafka-data:
   zoo-data:
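For reference, a minimal sketch of how a stack file like this is typically deployed and how the logs below were collected (the stack name shippo_kafka is inferred from the service names in the log output; the file name docker-stack.yml is an assumption):

# Deploy the stack from a swarm manager node
docker stack deploy -c docker-stack.yml shippo_kafka

# Confirm every service has its replicas running
docker service ls

# Follow the Debezium (Kafka Connect) service logs
docker service logs -f shippo_kafka_debezium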

When I check the Debezium service logs with docker service logs, it shows this error:

shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    | 2018-08-03 04:33:27,034 ERROR  ||  Stopping due to error   [org.apache.kafka.connect.cli.ConnectDistributed]
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    | org.apache.kafka.connect.errors.ConnectException: Failed to connect to and describe Kafka cluster. Check worker's broker connection and security properties.
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:64)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:45)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.connect.cli.ConnectDistributed.main(ConnectDistributed.java:77)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    | Caused by: java.util.concurrent.ExecutionException: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.common.internals.KafkaFutureImpl.wrapAndThrow(KafkaFutureImpl.java:45)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.common.internals.KafkaFutureImpl.access$000(KafkaFutureImpl.java:32)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.common.internals.KafkaFutureImpl$SingleWaiter.await(KafkaFutureImpl.java:89)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.common.internals.KafkaFutureImpl.get(KafkaFutureImpl.java:258)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    at org.apache.kafka.connect.util.ConnectUtils.lookupKafkaClusterId(ConnectUtils.java:58)
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    |    ... 2 more
shippo_kafka_debezium.1.5l1yhz27r6p2@kafka1    | Caused by: org.apache.kafka.common.errors.TimeoutException: Timed out waiting to send the call.

Can someone tell me how to fix this error? I am new to this stack, and after several days of research I could not figure it out. Thanks a lot!
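For anyone hitting the same timeout, a quick connectivity check from inside the Connect container helps narrow down whether the bootstrap address is even reachable (a sketch assuming the debezium/connect image ships with bash and getent, and that the service names above are used):

# Find the Connect container on the node where it was scheduled
docker ps --filter name=debezium

# Open a shell in it (replace <container-id> with the ID from the previous command)
docker exec -it <container-id> bash

# Inside the container: check DNS resolution of the bootstrap host
getent hosts kafka

# ...and TCP reachability of the port configured in BOOTSTRAP_SERVERS
timeout 5 bash -c 'echo > /dev/tcp/kafka/9094' && echo reachable || echo unreachable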

1 Answer

Answered by a Stack Overflow user on 2019-08-21 23:14:25

You can change the line HOSTNAME_COMMAND: "docker info | grep ^Name: | cut -d' ' -f 2" to HOSTNAME_COMMAND: "docker info | grep 'Node Address:' | cut -d' ' -f 4".
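A quick way to see what each variant resolves to is to run both pipelines on one of the swarm nodes; the result is what gets substituted for _{HOSTNAME_COMMAND} in KAFKA_ADVERTISED_LISTENERS (a sketch; actual output depends on your environment):

# Original command: picks the Docker host name from `docker info`
docker info | grep ^Name: | cut -d' ' -f 2

# Suggested command: picks the swarm node address instead
docker info | grep 'Node Address:' | cut -d' ' -f 4

# Whatever is printed is advertised as OUTSIDE://<value>:9094, so it must be
# resolvable and reachable by clients such as the Kafka Connect worker.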

Alternatively, you can use this docker-compose file:

version: '3.2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    deploy:
       mode: global
    volumes:
      - /shared_data/zoo1/data:/data
      - /shared_data/zoo1/datalog:/datalog
    environment:
        ZOO_MY_ID: 1
        ZOO_PORT: 2181
        ZOO_SERVERS: server.1=zookeeper:2888:3888

  kafka:
    image: wurstmeister/kafka:latest
    ports:
      - target: 9094
        published: 9094
        protocol: tcp
        mode: host
    deploy:
      mode: global
    environment:
      HOSTNAME_COMMAND: "docker info | grep 'Node Address:' | cut -d' ' -f 4"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: INSIDE:PLAINTEXT,OUTSIDE:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: INSIDE://:9092,OUTSIDE://_{HOSTNAME_COMMAND}:9094
      KAFKA_LISTENERS: INSIDE://:9092,OUTSIDE://:9094
      KAFKA_INTER_BROKER_LISTENER_NAME: INSIDE
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - /shared_data/kafka:/var/lib/kafka/data
    depends_on:
      - zookeeper
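After switching to this file (or just changing HOSTNAME_COMMAND in the original stack), redeploying and checking what the broker actually advertises is a useful sanity check (a sketch; the stack name shippo_kafka and the file name docker-compose.yml are assumptions):

# Redeploy the stack with the updated file
docker stack deploy -c docker-compose.yml shippo_kafka

# The broker logs its effective configuration at startup;
# look for the advertised.listeners value there
docker service logs shippo_kafka_kafka 2>&1 | grep advertised.listeners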
1 vote
Original question: https://stackoverflow.com/questions/51670975