Isolating clients with ZNodes
ZooKeeper is a dependency of many products supported by the Stackable Data Platform. To ensure that all of these products can safely share the same ZooKeeper cluster, they are isolated from one another using ZNodes.
This guide shows you how to set up multiple ZNodes to use with different products from the Stackable Data Platform, using Kafka and Druid as an example. For an explanation of the ZNode concept, read the ZNodes concept page.
Prerequisites
To follow this guide, you should have
- Access to a Kubernetes cluster
- The Stackable Operator for Apache ZooKeeper installed in said cluster
- A ZookeeperCluster already deployed on the cluster
If you have not yet set up the Operator and ZookeeperCluster, follow the getting started guide.
Steps
This guide assumes the ZookeeperCluster is called my-zookeeper and is running in the data namespace.
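If you followed the getting started guide, the cluster definition could look roughly like the sketch below; the product version is an assumption and should match your platform release:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
  name: my-zookeeper
  namespace: data
spec:
  image:
    productVersion: 3.8.4 # assumption, use a version supported by your release
  servers:
    roleGroups:
      default:
        replicas: 3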
Setting up the ZNodes
To set up a Kafka and Druid instance to use the ZookeeperCluster, two ZNodes are required, one for each product.
This guide assumes the Kafka instance is running in the same namespace as the ZooKeeper cluster, while the Druid instance is running in its own namespace called druid-ns.
First, the Druid ZNode:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: druid-znode (1)
  namespace: druid-ns (2)
spec:
  clusterRef: (3)
    name: my-zookeeper
    namespace: data
(1) The name of the Druid ZNode.
(2) The namespace where the ZNode should be created. This should be the same as the namespace of the product or client that wants to use the ZNode.
(3) The ZooKeeper cluster reference. Since ZooKeeper is running in a different namespace, both the cluster name and namespace need to be given.
And the Kafka ZNode:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
  name: kafka-znode (1)
  namespace: data (2)
spec:
  clusterRef: (3)
    name: my-zookeeper
(1) The name of the Kafka ZNode.
(2) The namespace where the ZNode should be created. Since Kafka is running in the same namespace as ZooKeeper, this is the namespace of my-zookeeper.
(3) The ZooKeeper cluster reference. The namespace is omitted here because the ZooKeeper cluster is in the same namespace as the ZNode object.
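Both ZookeeperZnode manifests can be applied with kubectl; the file names below are just placeholders for wherever the snippets above were saved:
kubectl apply -f druid-znode.yaml
kubectl apply -f kafka-znode.yaml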
The Stackable Operator for Apache ZooKeeper watches for ZookeeperZnode objects. When it finds one, it creates the ZNode inside the ZooKeeper cluster and a discovery ConfigMap with the same name and in the same namespace as the ZookeeperZnode.
In this example, two ConfigMaps are created:
- The Druid ZNode discovery ConfigMap druid-znode in the druid-ns namespace
- The Kafka ZNode discovery ConfigMap kafka-znode in the data namespace
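The creation of the two discovery ConfigMaps can be verified with kubectl, using the names and namespaces listed above:
kubectl get configmap druid-znode -n druid-ns
kubectl get configmap kafka-znode -n data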
Connecting the products to the ZNodes
The ConfigMaps with the names and namespaces given above will look similar to this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: ... (1)
  namespace: ...
data:
  ZOOKEEPER: pod-1:2181,pod-2:2181,pod-3:2181/{path} (2)
(1) Name and namespace as specified above.
(2) {path} will be a unique and unpredictable path that is generated by the operator.
This ConfigMap can then be mounted into other Pods and the ZOOKEEPER
key can be used to connect to the ZooKeeper instance and the correct ZNode.
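For clients that are not managed by a Stackable operator, one option is to expose the ZOOKEEPER key as an environment variable. The following Pod is only a sketch; the Pod name, container name and image are hypothetical:
---
apiVersion: v1
kind: Pod
metadata:
  name: zookeeper-client # hypothetical client Pod
  namespace: data
spec:
  containers:
    - name: client # hypothetical container
      image: my-zookeeper-client:latest # hypothetical image
      env:
        - name: ZOOKEEPER # connection string including the ZNode path
          valueFrom:
            configMapKeyRef:
              name: kafka-znode # discovery ConfigMap created above
              key: ZOOKEEPER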
All products that need a ZNode can be configured with a zookeeperConfigMapName
property.
As the name implies, this property references the discovery ConfigMap for the requested ZNode.
For Druid:
---
apiVersion: druid.stackable.tech/v1alpha1
kind: DruidCluster
metadata:
  name: my-druid
  namespace: druid-ns
spec:
  zookeeperConfigMapName: druid-znode
  ...
And for Kafka:
---
apiVersion: kafka.stackable.tech/v1alpha1
kind: KafkaCluster
metadata:
  name: my-kafka
  namespace: data
spec:
  zookeeperConfigMapName: kafka-znode
  ...
The Stackable Operators for Kafka and Druid use the discovery ConfigMaps to connect Kafka and Druid Pods with different ZNodes in a shared ZooKeeper cluster.
What’s next
You can find out more about the discovery ConfigMaps and ZNodes in the concepts documentation.
Restoring from backups
For security reasons, a unique ZNode path is generated every time the same ZookeeperZnode object is recreated, even if it has the same name.
If a ZookeeperZnode needs to be associated with an existing ZNode path, the field status.znodePath
can be set to the desired path.
Note that since this is a subfield of status
, it must explicitly be updated on the status
subresource, and requires RBAC permissions to replace the zookeeperznodes/status
resource.
For example:
kubectl get zookeeperznode/test-znode -o json -n $NAMESPACE \
| jq '.status.znodePath = "/znode-override"' \
| kubectl replace -f- --subresource=status
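The kubectl replace call above only succeeds if the caller is allowed to update the status subresource. A Role granting this could look roughly as follows; the Role name is only an example and the namespace must match the ZookeeperZnode:
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: zookeeperznode-status-editor # example name
  namespace: data # namespace of the ZookeeperZnode
rules:
  - apiGroups:
      - zookeeper.stackable.tech
    resources:
      - zookeeperznodes/status
    verbs:
      - get
      - update
      - patch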
The auto-generated ZNode will still be kept, and should be cleaned up by an administrator.
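One way to remove the orphaned ZNode is to run the ZooKeeper CLI from one of the server Pods. The Pod name, the location of zkCli.sh and the generated ZNode path below are assumptions and need to be adjusted to your cluster:
kubectl exec -n data my-zookeeper-server-default-0 -- \
  /stackable/zookeeper/bin/zkCli.sh deleteall /znode-<generated-id>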