Lu Chunli's work notes: who says programmers can't have a literary side?
Kafka's main shell scripts:
[hadoop@nnode kafka0.8.2.1]$ ll
total 80
-rwxr-xr-x 1 hadoop hadoop 943 2015-02-27 kafka-console-consumer.sh
-rwxr-xr-x 1 hadoop hadoop 942 2015-02-27 kafka-console-producer.sh
-rwxr-xr-x 1 hadoop hadoop 870 2015-02-27 kafka-consumer-offset-checker.sh
-rwxr-xr-x 1 hadoop hadoop 946 2015-02-27 kafka-consumer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop 860 2015-02-27 kafka-mirror-maker.sh
-rwxr-xr-x 1 hadoop hadoop 884 2015-02-27 kafka-preferred-replica-election.sh
-rwxr-xr-x 1 hadoop hadoop 946 2015-02-27 kafka-producer-perf-test.sh
-rwxr-xr-x 1 hadoop hadoop 872 2015-02-27 kafka-reassign-partitions.sh
-rwxr-xr-x 1 hadoop hadoop 866 2015-02-27 kafka-replay-log-producer.sh
-rwxr-xr-x 1 hadoop hadoop 872 2015-02-27 kafka-replica-verification.sh
-rwxr-xr-x 1 hadoop hadoop 4185 2015-02-27 kafka-run-class.sh
-rwxr-xr-x 1 hadoop hadoop 1333 2015-02-27 kafka-server-start.sh
-rwxr-xr-x 1 hadoop hadoop 891 2015-02-27 kafka-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop 868 2015-02-27 kafka-simple-consumer-shell.sh
-rwxr-xr-x 1 hadoop hadoop 861 2015-02-27 kafka-topics.sh
drwxr-xr-x 2 hadoop hadoop 4096 2015-02-27 windows
-rwxr-xr-x 1 hadoop hadoop 1370 2015-02-27 zookeeper-server-start.sh
-rwxr-xr-x 1 hadoop hadoop 875 2015-02-27 zookeeper-server-stop.sh
-rwxr-xr-x 1 hadoop hadoop 968 2015-02-27 zookeeper-shell.sh
[hadoop@nnode kafka0.8.2.1]$
Note: Kafka also provides .bat scripts for running on Windows, in the bin/windows directory.
ZooKeeper scripts
Every Kafka component depends on ZooKeeper, so a ZooKeeper environment must be in place before Kafka can be used. You can either set up a ZooKeeper cluster, or use the ZooKeeper scripts bundled with Kafka to start a single standalone-mode ZooKeeper node.
# Start the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-start.sh
USAGE: bin/zookeeper-server-start.sh zookeeper.properties
# The configuration file is config/zookeeper.properties; its main setting is ZooKeeper's local storage path (dataDir)
# Internally this invokes
exec $base_dir/kafka-run-class.sh
$EXTRA_ARGS org.apache.zookeeper.server.quorum.QuorumPeerMain $@
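For reference, a minimal config/zookeeper.properties for a standalone node looks like the following (these values mirror the defaults shipped with Kafka; adjust dataDir to your environment):

```properties
# Local path where ZooKeeper stores its snapshot data (example path)
dataDir=/tmp/zookeeper
# Port that clients connect on
clientPort=2181
# Disable the per-IP connection limit (fine for non-production use)
maxClientCnxns=0
```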
# Stop the ZooKeeper server
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-server-stop.sh
# Internally this invokes
ps ax | grep -i 'zookeeper' | grep -v grep | awk '{print $1}' | xargs kill -SIGINT
# ZooKeeper shell client
[hadoop@nnode kafka0.8.2.1]$ zookeeper-shell.sh
USAGE: bin/zookeeper-shell.sh zookeeper_host:port[/path] [args...]
# Internally this invokes
exec $(dirname $0)/kafka-run-class.sh org.apache.zookeeper.ZooKeeperMain -server "$@"
# The ZooKeeper shell can be used to inspect ZooKeeper's znodes
[hadoop@nnode kafka0.8.2.1]$ bin/zookeeper-shell.sh nnode:2181,dnode1:2181,dnode2:2181/
Connecting to nnode:2181,dnode1:2181,dnode2:2181/
Welcome to ZooKeeper!
JLine support is disabled
WATCHER::
WatchedEvent state:SyncConnected type:None path:null
ls /
[hbase, hadoop-ha, admin, zookeeper, consumers, config, zk-book, brokers, controller_epoch]
Note: $@ expands to the full list of arguments passed to the shell; $# is the number of those arguments.
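The behavior of these two special variables can be checked with a tiny throwaway shell function (the function name here is illustrative):

```shell
# show_args is a demo function for the special variables $# and $@
show_args() {
  echo "count: $#"   # number of arguments passed in
  echo "all: $@"     # all arguments, expanded as a list
}
show_args one two three
# prints:
# count: 3
# all: one two three
```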
Starting and stopping Kafka
# Start the Kafka server
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-server-start.sh
USAGE: bin/kafka-server-start.sh [-daemon] server.properties
# Internally this invokes
exec $base_dir/kafka-run-class.sh $EXTRA_ARGS kafka.Kafka $@
# (usage output omitted)
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-run-class.sh
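In practice the broker is usually launched in the background with the -daemon flag, pointing at the properties file (this assumes the working directory and config layout shown in the listing above, and a running ZooKeeper):

```shell
# Start the broker in the background; logs go to the logs/ directory
bin/kafka-server-start.sh -daemon config/server.properties
```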
# Stop the Kafka server
[hadoop@nnode kafka0.8.2.1]$ kafka-server-stop.sh
# Internally this invokes
ps ax | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}' | xargs kill -SIGTERM
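This pipeline can be traced with a fabricated ps line (the PIDs below are made up) to see which field ends up being handed to kill:

```shell
# Two fake 'ps ax' lines: a broker process, and the grep process itself
sample_ps='12345 pts/0  Sl  0:30 java -Xmx1G ... kafka.Kafka config/server.properties
23456 pts/0  S   0:00 grep -i kafka.Kafka'

# Same filter chain as kafka-server-stop.sh, minus the final kill:
# keep lines mentioning kafka.Kafka and java, drop the grep process,
# then print the first column (the PID)
pids=$(echo "$sample_ps" | grep -i 'kafka\.Kafka' | grep java | grep -v grep | awk '{print $1}')
echo "$pids"
# prints: 12345
```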
Note: on startup, Kafka reads its configuration from config/server.properties. The three core settings for starting a Kafka server are:
broker.id : the broker's unique identifier, a non-negative integer (the last octet of the host's IP is a common choice)
port : the port the server listens on for client connections (default 9092)
zookeeper.connect : the ZooKeeper connection string, in the form hostname1:port1[,hostname2:port2,hostname3:port3]
# Optional
log.dirs : the path(s) under which Kafka stores its data (default /tmp/kafka-logs), given as a
comma-separated list of one or more directories. When a new partition is created, it is placed
in whichever of these directories currently holds the fewest partitions.
num.partitions : the number of partitions per topic (default 1); can be overridden when creating a topic
# For other settings see http://kafka.apache.org/documentation.html#configuration
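Putting the settings above together, a minimal config/server.properties for a single broker could look like this (the host names reuse those from the transcript above; paths and values are examples, not a real deployment):

```properties
# Unique, non-negative broker id
broker.id=0
# Port for client connections
port=9092
# ZooKeeper connection string (comma-separated for an ensemble)
zookeeper.connect=nnode:2181,dnode1:2181,dnode2:2181
# Optional: data directories and default partition count
log.dirs=/tmp/kafka-logs
num.partitions=2
```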
Kafka messages
# Message producer
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-producer.sh
Read data from standard input and publish it to Kafka.
Option Description
------ -----------
--broker-list <broker-list> REQUIRED: The broker list string in the form HOST1:PORT1,HOST2:PORT2.
--topic <topic> REQUIRED: The topic id to produce messages to.
# These two options are required; to see the remaining optional ones, run the command without arguments
# Message consumer
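A typical invocation with both required options might look like this (the broker host and topic name are assumptions, and a running broker is needed):

```shell
# Publish lines typed on stdin to topic 'test'; Ctrl+C to exit
bin/kafka-console-producer.sh --broker-list nnode:9092 --topic test
```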
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-console-consumer.sh
The console consumer is a tool that reads data from Kafka and outputs it to standard output.
Option Description
------ -----------
--zookeeper <urls> REQUIRED: The connection string for the zookeeper connection,
in the form host:port.(Multiple URLS can be given to allow fail-over.)
--topic <topic> The topic id to consume on.
--from-beginning If the consumer does not already have an established offset to
consume from, start with the earliest message present in the
log rather than the latest message.
# The zookeeper option is required; all others are optional (see the help output for details)
# Manage and inspect topics
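A matching consumer invocation might look like this (host and topic names are assumptions; requires a running cluster):

```shell
# Read topic 'test' from the earliest retained message and print to stdout
bin/kafka-console-consumer.sh --zookeeper nnode:2181 --topic test --from-beginning
```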
[hadoop@nnode kafka0.8.2.1]$ bin/kafka-topics.sh
Create, delete, describe, or change a topic.
Option Description
------ -----------
--zookeeper <urls> REQUIRED: The connection string for the zookeeper connection,
in the form host:port. (Multiple URLS can be given to allow fail-over.)
--create Create a new topic.
--delete Delete a topic
--alter Alter the configuration for the topic.
--list List all available topics.
--describe List details for the given topics.
--topic <topic> The topic to be create, alter or describe. Can also accept
a regular expression except for --create option.
--help Print usage information.
# The zookeeper option is required; all others are optional (see the help output for details)
The remaining scripts are omitted for now.
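Typical kafka-topics.sh invocations might look like the following (topic name, partition count, and replication factor are example values; requires a running cluster):

```shell
# Create a topic with 2 partitions and replication factor 1
bin/kafka-topics.sh --zookeeper nnode:2181 --create --topic test \
    --partitions 2 --replication-factor 1
# Show partition leaders, replicas, and ISR for the topic
bin/kafka-topics.sh --zookeeper nnode:2181 --describe --topic test
# List every topic known to this cluster
bin/kafka-topics.sh --zookeeper nnode:2181 --list
```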