Bitnami package for Apache Kafka

What is Apache Kafka?

Apache Kafka is a distributed streaming platform designed to build real-time pipelines and can be used as a message broker or as a replacement for a log aggregation solution for big data applications.

Overview of Apache Kafka

Trademarks: This software listing is packaged by Bitnami. The respective trademarks mentioned in the offering are owned by the respective companies, and use of them does not imply any affiliation or endorsement.

TL;DR

docker run --name kafka bitnami/kafka:latest

Why use Bitnami Images?

  • Bitnami closely tracks upstream source changes and promptly publishes new versions of this image using our automated systems.
  • With Bitnami images the latest bug fixes and features are available as soon as possible.
  • Bitnami containers, virtual machines and cloud images use the same components and configuration approach, making it easy to switch between formats based on your project needs.
  • All our images are based on minideb (a minimalist Debian-based container image that gives you a small base container image and the familiarity of a leading Linux distribution) or scratch (an explicitly empty image).
  • All Bitnami images available in Docker Hub are signed with Notation. Check this post to know how to verify the integrity of the images.
  • Bitnami container images are released on a regular basis with the latest distribution packages available.

Looking to use Apache Kafka in production? Try VMware Tanzu Application Catalog, the commercial edition of the Bitnami catalog.

How to deploy Apache Kafka in Kubernetes?

Deploying Bitnami applications as Helm Charts is the easiest way to get started with our applications on Kubernetes. Read more about the installation in the Bitnami Apache Kafka Chart GitHub repository.
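For example, assuming Helm 3.8+ and access to the Bitnami OCI chart registry, a minimal install could look like this (the release name my-kafka is an arbitrary placeholder):

helm install my-kafka oci://registry-1.docker.io/bitnamicharts/kafka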

Bitnami containers can be used with Kubeapps for deployment and management of Helm Charts in clusters.

Why use a non-root container?

Non-root container images add an extra layer of security and are generally recommended for production environments. However, because they run as a non-root user, privileged tasks are typically off-limits. Learn more about non-root containers in our docs.

Supported tags and respective Dockerfile links

Learn more about the Bitnami tagging policy and the difference between rolling tags and immutable tags in our documentation page.

You can see the equivalence between the different tags by taking a look at the tags-info.yaml file present in the branch folder, i.e. bitnami/ASSET/BRANCH/DISTRO/tags-info.yaml.

Subscribe to project updates by watching the bitnami/containers GitHub repo.

Get this image

The recommended way to get the Bitnami Apache Kafka Docker Image is to pull the prebuilt image from the Docker Hub Registry.

docker pull bitnami/kafka:latest

To use a specific version, you can pull a versioned tag. You can view the list of available versions in the Docker Hub Registry.

docker pull bitnami/kafka:[TAG]

If you wish, you can also build the image yourself by cloning the repository, changing to the directory containing the Dockerfile and executing the docker build command. Remember to replace the APP, VERSION and OPERATING-SYSTEM path placeholders in the example command below with the correct values.

git clone https://github.com/bitnami/containers.git
cd containers/bitnami/APP/VERSION/OPERATING-SYSTEM
docker build -t bitnami/APP:latest .
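For instance, a concrete invocation for this image might look like the following (the version and distro directories here are illustrative; check the repository for the actual folder names):

git clone https://github.com/bitnami/containers.git
cd containers/bitnami/kafka/3.7/debian-12
docker build -t bitnami/kafka:latest .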

Persisting your data

If you remove the container, all your data and configurations will be lost, and the next time you run the image the data will be reinitialized. To avoid this loss of data, you should mount a volume that will persist even after the container is removed.

Note: If you have already started using Kafka, follow the steps on backing up and restoring to pull the data from your running container down to your host.

The image exposes a volume at /bitnami/kafka for the Apache Kafka data. For persistence you can mount a directory at this location from your host. If the mounted directory is empty, it will be initialized on the first run.
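For example, you could start the broker with a host directory mounted at that path (the host path is illustrative):

docker run -d --name kafka \
    -v /path/to/kafka-persistence:/bitnami/kafka \
    bitnami/kafka:latest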

Using Docker Compose:

This requires a minor change to the docker-compose.yml file present in this repository:

kafka:
  ...
  volumes:
    - /path/to/kafka-persistence:/bitnami/kafka
  ...

NOTE: As this is a non-root container, the mounted files and directories must have the proper permissions for the UID 1001.
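For example, you can hand ownership of the host directory to that UID before starting the container (path as in the snippet above):

sudo chown -R 1001:1001 /path/to/kafka-persistence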

Connecting to other containers

Using Docker container networking, an Apache Kafka server running inside a container can easily be accessed by your application containers.

Containers attached to the same network can communicate with each other using the container name as the hostname.

Using the Command Line

In this example, we will create an Apache Kafka client instance that will connect to the server instance that is running on the same docker network as the client.

Step 1: Create a network

docker network create app-tier --driver bridge

Step 2: Launch the Apache Kafka server instance

Use the --network app-tier argument to the docker run command to attach the Apache Kafka container to the app-tier network.

docker run -d --name kafka-server --hostname kafka-server \
    --network app-tier \
    -e KAFKA_CFG_NODE_ID=0 \
    -e KAFKA_CFG_PROCESS_ROLES=controller,broker \
    -e KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093 \
    -e KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT \
    -e KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka-server:9093 \
    -e KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER \
    bitnami/kafka:latest

Step 3: Launch your Apache Kafka client instance

Finally we create a new container instance to launch the Apache Kafka client and connect to the server created in the previous step:

docker run -it --rm \
    --network app-tier \
    bitnami/kafka:latest kafka-topics.sh --list --bootstrap-server kafka-server:9092

Using a Docker Compose file

When not specified, Docker Compose automatically sets up a new network and attaches all deployed services to that network. However, we will explicitly define a new bridge network named app-tier. In this example we assume that you want to connect to the Apache Kafka server from your own custom application image which is identified in the following snippet by the service name myapp.

version: '2'

networks:
  app-tier:
    driver: bridge

services:
  kafka:
    image: 'bitnami/kafka:latest'
    networks:
      - app-tier
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
  myapp:
    image: 'YOUR_APPLICATION_IMAGE'
    networks:
      - app-tier

IMPORTANT:

  1. Please update the YOUR_APPLICATION_IMAGE placeholder in the above snippet with your application image
  2. In your application container, use the hostname kafka to connect to the Apache Kafka server

Launch the containers using:

docker-compose up -d

Configuration

Environment variables

Customizable environment variables

| Name | Description | Default Value |
|------|-------------|---------------|
| KAFKA_MOUNTED_CONF_DIR | Kafka directory for mounted configuration files. | ${KAFKA_VOLUME_DIR}/config |
| KAFKA_INTER_BROKER_USER | Kafka inter broker communication user. | user |
| KAFKA_INTER_BROKER_PASSWORD | Kafka inter broker communication password. | bitnami |
| KAFKA_CONTROLLER_USER | Kafka control plane communication user. | controller_user |
| KAFKA_CONTROLLER_PASSWORD | Kafka control plane communication password. | bitnami |
| KAFKA_CERTIFICATE_PASSWORD | Password for certificates. | nil |
| KAFKA_TLS_TRUSTSTORE_FILE | Kafka truststore file location. | nil |
| KAFKA_TLS_TYPE | Choose the TLS certificate format to use. | JKS |
| KAFKA_TLS_CLIENT_AUTH | Configures kafka broker to request client authentication. | required |
| KAFKA_OPTS | Kafka deployment options. | nil |
| KAFKA_CFG_SASL_ENABLED_MECHANISMS | Kafka sasl.enabled.mechanisms configuration override. | PLAIN,SCRAM-SHA-256,SCRAM-SHA-512 |
| KAFKA_KRAFT_CLUSTER_ID | Kafka cluster ID when using Kafka Raft mode (KRaft). | nil |
| KAFKA_SKIP_KRAFT_STORAGE_INIT | If set to true, skip KRaft storage initialization when process.roles are configured. | false |
| KAFKA_CLIENT_LISTENER_NAME | Name of the listener intended to be used by clients; if set, configures the producer/consumer accordingly. | nil |
| KAFKA_ZOOKEEPER_PROTOCOL | Authentication protocol for Zookeeper connections. Allowed protocols: PLAINTEXT, SASL, SSL, and SASL_SSL. | PLAINTEXT |
| KAFKA_ZOOKEEPER_PASSWORD | Kafka Zookeeper user password for SASL authentication. | nil |
| KAFKA_ZOOKEEPER_USER | Kafka Zookeeper user for SASL authentication. | nil |
| KAFKA_ZOOKEEPER_TLS_KEYSTORE_PASSWORD | Kafka Zookeeper keystore file password and key password. | nil |
| KAFKA_ZOOKEEPER_TLS_TRUSTSTORE_PASSWORD | Kafka Zookeeper truststore file password. | nil |
| KAFKA_ZOOKEEPER_TLS_TRUSTSTORE_FILE | Kafka Zookeeper truststore file location. | nil |
| KAFKA_ZOOKEEPER_TLS_VERIFY_HOSTNAME | Verify Zookeeper hostname on TLS certificates. | true |
| KAFKA_ZOOKEEPER_TLS_TYPE | Choose the TLS certificate format to use. Allowed values: JKS, PEM. | JKS |
| KAFKA_CLIENT_USERS | List of additional users to KAFKA_CLIENT_USER that will be created into Zookeeper when using SASL_SCRAM for client communications. Separated by commas, semicolons or whitespaces. | user |
| KAFKA_CLIENT_PASSWORDS | Passwords for the users specified at KAFKA_CLIENT_USERS. Separated by commas, semicolons or whitespaces. | bitnami |
| KAFKA_HEAP_OPTS | Kafka heap options for Java. | -Xmx1024m -Xms1024m |
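For example, to run a small development broker with a reduced JVM heap, you can override KAFKA_HEAP_OPTS at startup (the values here are illustrative):

docker run --name kafka -e KAFKA_HEAP_OPTS="-Xmx512m -Xms512m" bitnami/kafka:latest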

Read-only environment variables

| Name | Description | Value |
|------|-------------|-------|
| KAFKA_BASE_DIR | Kafka installation directory. | ${BITNAMI_ROOT_DIR}/kafka |
| KAFKA_VOLUME_DIR | Kafka persistence directory. | /bitnami/kafka |
| KAFKA_DATA_DIR | Kafka directory where data is stored. | ${KAFKA_VOLUME_DIR}/data |
| KAFKA_CONF_DIR | Kafka configuration directory. | ${KAFKA_BASE_DIR}/config |
| KAFKA_CONF_FILE | Kafka configuration file. | ${KAFKA_CONF_DIR}/server.properties |
| KAFKA_CERTS_DIR | Kafka directory for certificate files. | ${KAFKA_CONF_DIR}/certs |
| KAFKA_INITSCRIPTS_DIR | Kafka directory for init scripts. | /docker-entrypoint-initdb.d |
| KAFKA_LOG_DIR | Directory where Kafka logs are stored. | ${KAFKA_BASE_DIR}/logs |
| KAFKA_HOME | Kafka home directory. | $KAFKA_BASE_DIR |
| KAFKA_DAEMON_USER | Kafka system user. | kafka |
| KAFKA_DAEMON_GROUP | Kafka system group. | kafka |

Additionally, any environment variable beginning with KAFKA_CFG_ will be mapped to its corresponding Apache Kafka key. For example, use KAFKA_CFG_BACKGROUND_THREADS in order to set background.threads or KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE in order to configure auto.create.topics.enable.

docker run --name kafka -e KAFKA_CFG_PROCESS_ROLES ... -e KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true bitnami/kafka:latest

or by modifying the docker-compose.yml file present in this repository:

kafka:
  ...
  environment:
    - KAFKA_CFG_AUTO_CREATE_TOPICS_ENABLE=true
  ...

Apache Kafka development setup example

To use Apache Kafka in a development setup, create the following docker-compose.yml file:

version: "3"
services:
  kafka:
    image: 'bitnami/kafka:latest'
    ports:
      - '9092:9092'
    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093
      - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER

To deploy it, run the following command in the directory where the docker-compose.yml file is located:

docker-compose up -d
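Once the broker is running, you can smoke-test it from inside the container by creating and describing a topic (the topic name test is arbitrary):

docker-compose exec kafka kafka-topics.sh --create --topic test --bootstrap-server localhost:9092
docker-compose exec kafka kafka-topics.sh --describe --topic test --bootstrap-server localhost:9092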
Kafka with Zookeeper

Apache Kafka Raft (KRaft) makes use of a new quorum controller service in Kafka which replaces the previous controller and makes use of an event-based variant of the Raft consensus protocol. This greatly simplifies Kafka’s architecture by consolidating responsibility for metadata into Kafka itself, rather than splitting it between two different systems: ZooKeeper and Kafka.

More info can be found here: https://developer.confluent.io/learn/kraft/

NOTE: According to KIP-833, KRaft is now in a production-ready state.

However, if you want to keep using ZooKeeper, you can use the following configuration:

version: "2"

services:
  zookeeper:
    image: docker.io/bitnami/zookeeper:3.9
    ports:
      - "2181:2181"
    volumes:
      - "zookeeper_data:/bitnami"
    environment:
      - ALLOW_ANONYMOUS_LOGIN=yes
  kafka:
    image: docker.io/bitnami/kafka:3.4
    ports:
      - "9092:9092"
    volumes:
      - "kafka_data:/bitnami"
    environment:
      - KAFKA_CFG_ZOOKEEPER_CONNECT=zookeeper:2181
    depends_on:
      - zookeeper

volumes:
  zookeeper_data:
    driver: local
  kafka_data:
    driver: local

Accessing Apache Kafka with internal and external clients

In order to use internal and external clients to access Apache Kafka brokers you need to configure one listener for each kind of client.

To do so, add the following environment variables to your docker-compose:

    environment:
      - KAFKA_CFG_NODE_ID=0
      - KAFKA_CFG_PROCESS_ROLES=controller,broker
      - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@<your_host>:9093
+     - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://:9094
+     - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://localhost:9094
+     - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:PLAINTEXT,EXTERNAL:PLAINTEXT,PLAINTEXT:PLAINTEXT
      - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER

And expose the external port:

(the internal, client one can still be used within the docker network)

    ports:
-     - '9092:9092'
+     - '9094:9094'

Note: To connect from an external machine, change localhost above to your host's external IP/hostname and include EXTERNAL://0.0.0.0:9094 in KAFKA_CFG_LISTENERS to allow for remote connections.
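As a sketch, the listener configuration for remote clients might then look like this (replace <your_host> with your host's external IP or hostname):

      - KAFKA_CFG_LISTENERS=PLAINTEXT://:9092,CONTROLLER://:9093,EXTERNAL://0.0.0.0:9094
      - KAFKA_CFG_ADVERTISED_LISTENERS=PLAINTEXT://kafka:9092,EXTERNAL://<your_host>:9094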

Producer and consumer using external client

These clients, from the same host, will use localhost to connect to Apache Kafka.

kafka-console-producer.sh --producer.config /opt/bitnami/kafka/config/producer.properties --bootstrap-server 127.0.0.1:9094 --topic test
kafka-console-consumer.sh --consumer.config /opt/bitnami/kafka/config/consumer.properties --bootstrap-server 127.0.0.1:9094 --topic test --from-beginning

If running these commands from another machine, change the address accordingly.

Producer and consumer using internal client

These clients, from other containers on the same Docker network, will use the kafka container service hostname to connect to Apache Kafka.

kafka-console-producer.sh --producer.config /opt/bitnami/kafka/config/producer.properties --bootstrap-server kafka:9092 --topic test
kafka-console-consumer.sh --consumer.config /opt/bitnami/kafka/config/consumer.properties --bootstrap-server kafka:9092 --topic test --from-beginning

Similarly, application code will need to use bootstrap.servers=kafka:9092.

More info about Apache Kafka listeners can be found in this great article.

Security

In order to configure authentication, you must configure the Apache Kafka listeners properly. The example below configures Apache Kafka with SASL_SSL authentication for communications with clients, and SASL authentication for controller-related communications.

The environment variables below should be defined to configure the listeners, and the SASL credentials for client communications:

KAFKA_CFG_LISTENERS=SASL_SSL://:9092,CONTROLLER://:9093
KAFKA_CFG_ADVERTISED_LISTENERS=SASL_SSL://localhost:9092
KAFKA_CLIENT_USERS=user
KAFKA_CLIENT_PASSWORDS=password
KAFKA_CLIENT_LISTENER_NAME=SASL_SSL
KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
KAFKA_CONTROLLER_USER=controller_user
KAFKA_CONTROLLER_PASSWORD=controller_password

You must also use your own certificates for SSL. You can drop your Java Key Stores or PEM files into /opt/bitnami/kafka/config/certs. If the JKS or PEM certs are password protected (recommended), you will need to provide the password to get access to the keystores:

KAFKA_CERTIFICATE_PASSWORD=myCertificatePassword

If the truststore is mounted in a different location than /opt/bitnami/kafka/config/certs/kafka.truststore.jks, /opt/bitnami/kafka/config/certs/kafka.truststore.pem, /bitnami/kafka/config/certs/kafka.truststore.jks or /bitnami/kafka/config/certs/kafka.truststore.pem, set the KAFKA_TLS_TRUSTSTORE_FILE variable.

The kafka-generate-ssl.sh script can help you with the creation of the JKS and certificates.

Keep in mind the following notes:

  • When prompted to enter a password, use the same one for all.
  • Set the Common Name or FQDN values to your Apache Kafka container hostname, e.g. kafka.example.com. After entering this value, when prompted "What is your first and last name?", enter this value as well.
    • As an alternative, you can disable hostname verification by setting the environment variable KAFKA_CFG_SSL_ENDPOINT_IDENTIFICATION_ALGORITHM to an empty string.
  • When setting up an Apache Kafka cluster (check the "Setting up an Apache Kafka Cluster" section for more information), each Apache Kafka broker and logical client needs its own keystore. You will have to repeat the process for each of the brokers in the cluster.

The following docker-compose file is an example showing how to mount your JKS certificates protected by the password certificatePassword123.
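The original example was trimmed from this page (see the note below), so here is a minimal sketch reconstructed from the settings above; the keystore and truststore file names follow the default certificate locations mentioned earlier, and the KRaft settings mirror the earlier examples. Refer to the full README for the canonical version.

kafka:
  image: 'bitnami/kafka:latest'
  ports:
    - '9092:9092'
  environment:
    - KAFKA_CFG_NODE_ID=0
    - KAFKA_CFG_PROCESS_ROLES=controller,broker
    - KAFKA_CFG_CONTROLLER_QUORUM_VOTERS=0@kafka:9093
    - KAFKA_CFG_CONTROLLER_LISTENER_NAMES=CONTROLLER
    - KAFKA_CFG_LISTENERS=SASL_SSL://:9092,CONTROLLER://:9093
    - KAFKA_CFG_ADVERTISED_LISTENERS=SASL_SSL://localhost:9092
    - KAFKA_CFG_LISTENER_SECURITY_PROTOCOL_MAP=CONTROLLER:SASL_PLAINTEXT,SASL_SSL:SASL_SSL
    - KAFKA_CFG_SASL_MECHANISM_CONTROLLER_PROTOCOL=PLAIN
    - KAFKA_CLIENT_USERS=user
    - KAFKA_CLIENT_PASSWORDS=password
    - KAFKA_CLIENT_LISTENER_NAME=SASL_SSL
    - KAFKA_CONTROLLER_USER=controller_user
    - KAFKA_CONTROLLER_PASSWORD=controller_password
    - KAFKA_CERTIFICATE_PASSWORD=certificatePassword123
  volumes:
    - './kafka.keystore.jks:/opt/bitnami/kafka/config/certs/kafka.keystore.jks:ro'
    - './kafka.truststore.jks:/opt/bitnami/kafka/config/certs/kafka.truststore.jks:ro'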

Note: the README for this image is longer than the Docker Hub length limit of 25,000 characters, so it has been trimmed. The full README can be found at https://github.com/bitnami/containers/blob/main/bitnami/kafka/README.md

Docker Pull Command

docker pull bitnami/kafka