kafka-security-saslscram-acl

1. Introduction

Kafka security mainly covers two topics: authentication and authorization.

Authentication proves that you are, say, alice; authorization decides what you are allowed to do, for example whether you can write to or read from a topic.

http://kafka.apache.org/documentation/#security describes quite a few options. Before enabling authentication and authorization in production there is a lot to weigh, such as whether permissions can be granted dynamically and whether client onboarding stays simple. After some discussion, we decided on SASL/SCRAM + ACL.

This article has two parts: configuration and operational commands.

2. Configuration

2.1 Notes

Authentication between the broker and ZooKeeper does not support org.apache.kafka.common.security.scram.ScramLoginModule, so org.apache.zookeeper.server.auth.DigestLoginModule is used there instead. See:

https://cwiki.apache.org/confluence/display/ZOOKEEPER/Client-Server+mutual+authentication

This part is optional; if it is missing, the broker still starts, but the log shows a warning that no JAAS 'Client' section was found:

[2019-02-24 16:03:15,121] WARN SASL configuration failed: javax.security.auth.login.LoginException: No JAAS configuration section named 'Client' was found in specified JAAS configuration file: '/Users/liubinbin/Documents/install/kafka_2.12-2.1.0/config/kafka_server_jaas.conf'. Will continue connection to Zookeeper server without SASL authentication, if Zookeeper server allows it. (org.apache.zookeeper.ClientCnxn)

With it configured, the broker logs the following at startup:

[2019-02-24 14:06:09,599] INFO Client successfully logged in. (org.apache.zookeeper.Login)
[2019-02-24 14:06:09,600] INFO Client will use DIGEST-MD5 as SASL mechanism. (org.apache.zookeeper.client.ZooKeeperSaslClient)
[2019-02-24 14:06:09,657] INFO Opening socket connection to server localhost/0:0:0:0:0:0:0:1:2181. Will attempt to SASL-authenticate using Login Context section 'Client' (org.apache.zookeeper.ClientCnxn)
[2019-02-24 14:06:09,676] INFO Socket connection established to localhost/0:0:0:0:0:0:0:1:2181, initiating session (org.apache.zookeeper.ClientCnxn)
[2019-02-24 14:06:09,795] INFO Session establishment complete on server localhost/0:0:0:0:0:0:0:1:2181, sessionid = 0x100043c21d70000, negotiated timeout = 6000 (org.apache.zookeeper.ClientCnxn)
[2019-02-24 14:06:09,799] INFO [ZooKeeperClient] Connected. (kafka.zookeeper.ZooKeeperClient)

2.2 broker

2.2.1 server.properties

listeners=SASL_PLAINTEXT://bin:9092

########### SASL/SCRAM ############################
security.inter.broker.protocol=SASL_PLAINTEXT
sasl.mechanism.inter.broker.protocol=SCRAM-SHA-256
sasl.enabled.mechanisms=SCRAM-SHA-256

########### ACL ############################
authorizer.class.name=kafka.security.auth.SimpleAclAuthorizer
allow.everyone.if.no.acl.found=true
# must match the (case-sensitive) principal name used in the JAAS files
super.users=User:admin

######################### ZK ############################
zookeeper.set.acl=true

######################### Topic ############################
auto.create.topics.enable=false

2.2.2 kafka_server_jaas.conf

Create a new JAAS file with the following content:

KafkaServer {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="admin"
password="admin-secret";
};

Client {
org.apache.zookeeper.server.auth.DigestLoginModule required
username="admin"
password="admin-secret";
};

Add the following line to kafka-server-start.sh so the JAAS file is picked up by the broker's JVM:

export KAFKA_OPTS="-Djava.security.auth.login.config=$KAFKA_HOME/config/kafka_server_jaas.conf"
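As an alternative to the external JAAS file, brokers on Kafka 0.10.2 or newer can carry the KafkaServer credentials directly in server.properties via a listener-prefixed sasl.jaas.config property (KIP-85). A sketch matching the SASL_PLAINTEXT listener used above; note that the ZooKeeper Client login still needs the JAAS file, so this only replaces the KafkaServer section:

```properties
listener.name.sasl_plaintext.scram-sha-256.sasl.jaas.config=\
  org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="admin" \
  password="admin-secret";
```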

2.3 zookeeper

2.3.1 kafka_zk_jaas.conf

Create a new JAAS file with the following content:

Server {
org.apache.zookeeper.server.auth.DigestLoginModule required
user_admin="admin-secret";
};

The user_admin entry (user_<name>="<password>") must match the Client section in kafka_server_jaas.conf.

Add the following line to zookeeper-server-start.sh so the JAAS file is picked up by ZooKeeper's JVM:

export KAFKA_OPTS="-Djava.security.auth.login.config=$KAFKA_HOME/config/kafka_zk_jaas.conf"

2.3.2 zoo.cfg

Add the following configuration:

authProvider.1=org.apache.zookeeper.server.auth.SASLAuthenticationProvider

requireClientAuthScheme=sasl

2.4 client

2.4.1 kafka_client_jaas.conf

Create a new JAAS file with the following content:

KafkaClient {
org.apache.kafka.common.security.scram.ScramLoginModule required
username="alice"
password="alice-secret";
};

Add the following line to kafka-console-consumer.sh and kafka-console-producer.sh so the JAAS file is picked up by the client's JVM:

export KAFKA_OPTS="-Djava.security.auth.login.config=/Users/liubinbin/Documents/install/kafka_2.12-2.1.0/config/kafka_client_jaas.conf"

2.4.2 saslscram-producer.properties

Add the following configuration:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256

2.4.3 saslscram-consumer.properties

Add the following configuration:

security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
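Clients on Kafka 0.10.2 or newer can likewise skip the KAFKA_OPTS/JAAS-file step by putting the login module into the same properties file via sasl.jaas.config (KIP-85). Assuming such a version, saslscram-consumer.properties could then read:

```properties
security.protocol=SASL_PLAINTEXT
sasl.mechanism=SCRAM-SHA-256
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="alice" \
  password="alice-secret";
```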

3. Commands

3.1 Authentication

3.1.1 Add a user

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[iterations=8192,password=alice-secret],SCRAM-SHA-512=[password=alice-secret]' --entity-type users --entity-name alice
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'SCRAM-SHA-256=[password=admin-secret],SCRAM-SHA-512=[password=admin-secret]' --entity-type users --entity-name admin
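These commands make the broker derive salted SCRAM credentials and store those in ZooKeeper instead of the plaintext password. A minimal Python sketch of the derivation (per RFC 5802; the helper name is mine):

```python
import base64
import hashlib
import hmac
import os


def scram_sha256_credentials(password, iterations=8192, salt=None):
    """Derive the values Kafka stores for a SCRAM-SHA-256 user (RFC 5802)."""
    salt = os.urandom(16) if salt is None else salt
    # SaltedPassword := PBKDF2-HMAC-SHA-256(password, salt, iterations)
    salted = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    client_key = hmac.new(salted, b"Client Key", hashlib.sha256).digest()
    server_key = hmac.new(salted, b"Server Key", hashlib.sha256).digest()
    return {
        "salt": base64.b64encode(salt).decode(),
        # StoredKey = H(ClientKey): the server can verify a client's proof
        # with this without ever seeing the plaintext password.
        "stored_key": base64.b64encode(hashlib.sha256(client_key).digest()).decode(),
        "server_key": base64.b64encode(server_key).decode(),
        "iterations": iterations,
    }
```

The resulting salt/stored_key/server_key/iterations tuple is essentially what kafka-configs.sh --describe prints back for the user.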

3.1.2 List users

bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users

3.1.3 Describe a user

bin/kafka-configs.sh --zookeeper localhost:2181 --describe --entity-type users --entity-name alice

3.1.4 Delete a user

bin/kafka-configs.sh --zookeeper localhost:2181 --alter --delete-config 'SCRAM-SHA-512' --entity-type users --entity-name alice

3.2 Authorization

3.2.1 Add ACLs

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Write --topic liubb
bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --add --allow-principal User:alice --operation Read --topic liubb --group test-consumer-group

3.2.2 List ACLs

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --list --topic liubb

3.2.3 Remove ACLs

bin/kafka-acls.sh --authorizer-properties zookeeper.connect=localhost:2181 --remove --allow-principal User:alice --operation Read --operation Write --topic liubb

3.3 Clients

Note: inside a virtual machine, localhost may need to be replaced with the actual hostname.

3.3.1 Producer

bin/kafka-console-producer.sh --broker-list localhost:9092  --topic liubb --producer.config config/saslscram-producer.properties

3.3.2 Consumer

bin/kafka-console-consumer.sh --bootstrap-server localhost:9092 --from-beginning --topic liubb --consumer.config config/saslscram-consumer.properties
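For programmatic clients, the same settings map onto client configuration keys one-to-one. A sketch of that mapping using the keyword names of the third-party kafka-python package (the helper function is mine; the returned dict would be passed to its KafkaProducer or KafkaConsumer):

```python
def scram_client_config(username, password, bootstrap="localhost:9092"):
    """Client settings equivalent to saslscram-*.properties plus the JAAS login."""
    return {
        "bootstrap_servers": bootstrap,
        "security_protocol": "SASL_PLAINTEXT",  # security.protocol
        "sasl_mechanism": "SCRAM-SHA-256",      # sasl.mechanism
        "sasl_plain_username": username,        # from the JAAS KafkaClient section
        "sasl_plain_password": password,
    }

# e.g. KafkaConsumer("liubb", group_id="test-consumer-group",
#                    **scram_client_config("alice", "alice-secret"))
```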