Getting Started with Kafka

xiaoxiao  2021-02-28

1. Installation: https://www.jianshu.com/p/2b209d74139c

2. Detailed tutorial: https://blog.csdn.net/csolo/article/details/52389646

3. Official quickstart: https://kafka.apache.org/quickstart

cd /usr/local/Cellar/kafka/1.1.0    (the Homebrew install location)

1. Install Kafka

Run: brew install kafka

2. Edit server.properties

Run: vi /usr/local/etc/kafka/server.properties and add one line of configuration below the commented example #listeners=PLAINTEXT://your.host.name:9092, as follows: listeners=PLAINTEXT://localhost:9092
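After the edit, the relevant section of /usr/local/etc/kafka/server.properties should look roughly like this (localhost is only appropriate for a single-machine setup):

```properties
# The address the socket server listens on.
# The commented line is the template shipped with Kafka;
# the uncommented line is the one added for local use.
#listeners=PLAINTEXT://your.host.name:9092
listeners=PLAINTEXT://localhost:9092
```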

3. Start ZooKeeper

Run: zkServer start

4. Start Kafka with the server.properties configuration

From Kafka's bin directory, run: ./kafka-server-start /usr/local/etc/kafka/server.properties

5. In a new terminal session, list the Kafka topics

From Kafka's bin directory, run: ./kafka-topics --list --zookeeper localhost:2181

6. Start a Kafka producer

From Kafka's bin directory, run: ./kafka-console-producer --topic [topic-name] --broker-list localhost:9092 (the listeners address configured in step 2)
7. Start a Kafka consumer

From Kafka's bin directory, run: ./kafka-console-consumer --bootstrap-server localhost:9092 --topic [topic-name]

Step 2: Start the server

Kafka uses ZooKeeper so you need to first start a ZooKeeper server if you don't already have one. You can use the convenience script packaged with kafka to get a quick-and-dirty single-node ZooKeeper instance.

> bin/zookeeper-server-start.sh config/zookeeper.properties
[2013-04-22 15:01:37,495] INFO Reading configuration from: config/zookeeper.properties (org.apache.zookeeper.server.quorum.QuorumPeerConfig)
...

Now start the Kafka server:

> bin/kafka-server-start.sh config/server.properties
[2013-04-22 15:01:47,028] INFO Verifying properties (kafka.utils.VerifiableProperties)
[2013-04-22 15:01:47,051] INFO Property socket.send.buffer.bytes is overridden to 1048576 (kafka.utils.VerifiableProperties)
...

Step 3: Create a topic

Let's create a topic named "test" with a single partition and only one replica:

> bin/kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test

We can now see that topic if we run the list topic command:

> bin/kafka-topics.sh --list --zookeeper localhost:2181
test

Alternatively, instead of manually creating topics you can also configure your brokers to auto-create topics when a non-existent topic is published to.
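Topic auto-creation is controlled by a broker setting in server.properties (shown here with its default value, so auto-creation is enabled out of the box):

```properties
# server.properties: create a topic automatically when a producer or
# consumer references one that does not yet exist (default: true)
auto.create.topics.enable=true
```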

Step 4: Send some messages

Kafka comes with a command line client that will take input from a file or from standard input and send it out as messages to the Kafka cluster. By default, each line will be sent as a separate message.
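The per-line framing can be sketched in Python. The helper below is hypothetical and only illustrates how the console producer turns its standard input into individual messages; it is not Kafka's actual implementation:

```python
def lines_to_messages(stdin_text: str) -> list[bytes]:
    """Split console-producer input as described above:
    each line of input becomes one separate UTF-8 encoded message."""
    return [line.encode("utf-8") for line in stdin_text.splitlines()]

# Two lines of input become two separate messages.
msgs = lines_to_messages("This is a message\nThis is another message\n")
print(msgs)  # [b'This is a message', b'This is another message']
```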

Run the producer and then type a few messages into the console to send to the server.

> bin/kafka-console-producer.sh --broker-list localhost:9092 --topic test
This is a message
This is another message

Step 5: Start a consumer

Kafka also has a command line consumer that will dump out messages to standard output.

> bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic test --from-beginning
This is a message
This is another message
Original source: https://www.6miu.com/read-2300381.html
