First steps
Once you have followed the steps in the Installation section to install the operator and its dependencies, you can deploy an HBase cluster and its dependencies. Afterwards you can verify that it works by creating tables and data in HBase using the REST API and Apache Phoenix (an SQL layer used to interact with HBase).
Setup
ZooKeeper
To deploy a ZooKeeper cluster, create a file called zk.yaml:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperCluster
metadata:
name: simple-zk
spec:
image:
productVersion: 3.9.2
servers:
roleGroups:
default:
replicas: 1
We also need to define a ZNode that will be used by the HDFS and HBase clusters to reference ZooKeeper.
Create another file called znode.yaml and define a separate ZNode for each service:
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
name: simple-hdfs-znode
spec:
clusterRef:
name: simple-zk
---
apiVersion: zookeeper.stackable.tech/v1alpha1
kind: ZookeeperZnode
metadata:
name: simple-hbase-znode
spec:
clusterRef:
name: simple-zk
Apply both of these files:
kubectl apply -f zk.yaml
kubectl apply -f znode.yaml
The state of the ZooKeeper cluster can be tracked with kubectl:
kubectl rollout status --watch statefulset/simple-zk-server-default --timeout=300s
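The two ZNodes are exposed to other clusters through discovery ConfigMaps, which the HDFS and HBase definitions below reference via zookeeperConfigMapName. Assuming the operator names each ConfigMap after its ZNode (as those references imply), you can check that both exist:
kubectl get configmap simple-hdfs-znode simple-hbase-znode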
HDFS
An HDFS cluster has three components: the namenode, the datanode and the journalnode.
Create a file named hdfs.yaml defining two namenodes and one datanode and one journalnode:
---
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
name: simple-hdfs
spec:
image:
productVersion: 3.3.4
clusterConfig:
dfsReplication: 1
zookeeperConfigMapName: simple-hdfs-znode
nameNodes:
roleGroups:
default:
replicas: 2
dataNodes:
roleGroups:
default:
replicas: 1
journalNodes:
roleGroups:
default:
replicas: 1
Where:
- metadata.name contains the name of the HDFS cluster
- the HDFS version in the Docker image provided by Stackable must be set in spec.image.productVersion
Please note that the version you need to specify for spec.image.productVersion is the desired version of Apache Hadoop HDFS.
You can optionally set spec.image.stackableVersion to a specific release such as 24.7.0, but it is recommended to leave it out and use the default provided by the operator.
For a list of available versions please check our image registry.
It should generally be safe to simply use the latest image version that is available.
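For example, pinning both versions explicitly would look like the snippet below; the stackableVersion line is optional and shown only for illustration:
spec:
  image:
    productVersion: 3.3.4
    stackableVersion: 24.7.0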
Create the actual HDFS cluster by applying the file:
kubectl apply -f hdfs.yaml
Track the progress with kubectl as this step may take a few minutes:
kubectl rollout status --watch statefulset/simple-hdfs-datanode-default --timeout=300s
kubectl rollout status --watch statefulset/simple-hdfs-namenode-default --timeout=300s
kubectl rollout status --watch statefulset/simple-hdfs-journalnode-default --timeout=300s
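The HBase cluster definition in the next step references the HDFS cluster through a discovery ConfigMap called simple-hdfs. Assuming the HDFS operator publishes this ConfigMap under the same name as the cluster (as the hdfsConfigMapName reference below implies), you can confirm it is present before moving on:
kubectl get configmap simple-hdfs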
HBase
You can now create the HBase cluster.
Create a file called hbase.yaml
containing the following:
---
apiVersion: hbase.stackable.tech/v1alpha1
kind: HbaseCluster
metadata:
name: simple-hbase
spec:
image:
productVersion: 2.4.18
clusterConfig:
hdfsConfigMapName: simple-hdfs
zookeeperConfigMapName: simple-hbase-znode
masters:
roleGroups:
default:
replicas: 1
regionServers:
roleGroups:
default:
config:
resources:
cpu:
min: 300m
max: "3"
memory:
limit: 3Gi
replicas: 1
restServers:
roleGroups:
default:
replicas: 1
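As with ZooKeeper and HDFS, apply the file and wait for the rollout to finish. The StatefulSet names below are assumptions that follow the same cluster-role-rolegroup naming pattern seen in the previous steps:
kubectl apply -f hbase.yaml
kubectl rollout status --watch statefulset/simple-hbase-master-default --timeout=300s
kubectl rollout status --watch statefulset/simple-hbase-regionserver-default --timeout=300s
kubectl rollout status --watch statefulset/simple-hbase-restserver-default --timeout=300s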
Verify that it works
To test the cluster you will use the REST API to check its version and status, and to create and inspect a new table. You will also use Phoenix to create, populate and query a second new table, before listing all non-system tables in HBase. These actions will be carried out from one of the HBase components, the REST server.
First, check the cluster version with this call:
kubectl exec -n default simple-hbase-restserver-default-0 -- \
curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/version/cluster"
This will return the version that was specified in the HBase cluster definition:
{"Version":"2.4.18"}
The cluster status can be checked and formatted like this:
kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/status/cluster" | json_pp
which will display cluster metadata that looks like this (only the first region is included for the sake of readability):
{
"DeadNodes" : [],
"LiveNodes" : [
{
"Region" : [
{
"currentCompactedKVs" : 0,
"memStoreSizeMB" : 0,
"name" : "U1lTVEVNLkNBVEFMT0csLDE2NjExNjA0NDM2NjcuYmYwMzA1YmM4ZjFmOGIwZWMwYjhmMGNjMWI5N2RmMmUu",
"readRequestsCount" : 104,
"rootIndexSizeKB" : 1,
"storefileIndexSizeKB" : 1,
"storefileSizeMB" : 1,
"storefiles" : 1,
"stores" : 1,
"totalCompactingKVs" : 0,
"totalStaticBloomSizeKB" : 0,
"totalStaticIndexSizeKB" : 1,
"writeRequestsCount" : 360
},
...
],
"heapSizeMB" : 351,
"maxHeapSizeMB" : 11978,
"name" : "simple-hbase-regionserver-default-0.simple-hbase-regionserver-default.default.svc.cluster.local:16020",
"requests" : 395,
"startCode" : 1661156787704
}
],
"averageLoad" : 43,
"regions" : 43,
"requests" : 1716
}
You can now create a table like this:
kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XPUT -H "Accept: text/xml" -H "Content-Type: text/xml" \
"http://simple-hbase-restserver-default:8080/users/schema" \
-d '<TableSchema name="users"><ColumnSchema name="cf" /></TableSchema>'
This will create a table users with a single column family cf.
Its creation can be verified by listing it:
kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/users/schema" | json_pp
{
"table" : [
{
"name" : "users"
}
]
}
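The REST API can also be used to write and read data. The sketch below stores a single cell in the users table; the row key row1, column cf:name and value alice are made-up examples, and all three are base64-encoded in the payload because that is the format the REST endpoint expects (cm93MQ== is row1, Y2Y6bmFtZQ== is cf:name, YWxpY2U= is alice):
kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XPUT -H "Content-Type: text/xml" \
"http://simple-hbase-restserver-default:8080/users/row1" \
-d '<CellSet><Row key="cm93MQ=="><Cell column="Y2Y6bmFtZQ==">YWxpY2U=</Cell></Row></CellSet>'
kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/users/row1" | json_pp
The GET call should return the stored cell, with the row key, column and value still base64-encoded in the response.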
An alternative way to interact with HBase is to use the Phoenix library that is pre-installed on the Stackable HBase image (in the /stackable/phoenix directory).
Use the Python utility psql.py (found in /stackable/phoenix/bin) to create, populate and query a table called WEB_STAT:
kubectl exec -n default simple-hbase-restserver-default-0 -- \
/stackable/phoenix/bin/psql.py \
/stackable/phoenix/examples/WEB_STAT.sql \
/stackable/phoenix/examples/WEB_STAT.csv \
/stackable/phoenix/examples/WEB_STAT_QUERIES.sql
The final command will display some grouped data like this:
HO TOTAL_ACTIVE_VISITORS
-- ----------------------------------------
EU 150
NA 1
Time: 0.017 sec(s)
Check the tables again with:
kubectl exec -n default simple-hbase-restserver-default-0 \
-- curl -s -XGET -H "Accept: application/json" "http://simple-hbase-restserver-default:8080/users/schema" | json_pp
This time the list includes not just users (created above with the REST API) and WEB_STAT, but several other tables too:
{
"table" : [
{
"name" : "SYSTEM.CATALOG"
},
{
"name" : "SYSTEM.CHILD_LINK"
},
{
"name" : "SYSTEM.FUNCTION"
},
{
"name" : "SYSTEM.LOG"
},
{
"name" : "SYSTEM.MUTEX"
},
{
"name" : "SYSTEM.SEQUENCE"
},
{
"name" : "SYSTEM.STATS"
},
{
"name" : "SYSTEM.TASK"
},
{
"name" : "WEB_STAT"
},
{
"name" : "users"
}
]
}
This is because Phoenix requires these SYSTEM.* tables for its own internal mapping mechanism, and they are created the first time that Phoenix is used on the cluster.
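To run an ad-hoc query of your own against WEB_STAT, one option is to write a small SQL file inside the pod and pass it to psql.py. This is only a sketch: it assumes psql.py picks up the cluster connection the same way as in the example above, and that WEB_STAT has a HOST column (as the grouped output above suggests):
kubectl exec -n default simple-hbase-restserver-default-0 -- bash -c \
'echo "SELECT HOST, COUNT(*) FROM WEB_STAT GROUP BY HOST;" > /tmp/hosts.sql && /stackable/phoenix/bin/psql.py /tmp/hosts.sql'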
What’s next
Look at the Usage guide to find out more about configuring your HBase cluster.