Introduction

This is the fourth blog in our four-part series on how to achieve resource isolation in Apache Pulsar. Before we dive in, let's review what was covered in Parts I, II, and III.
Pulsar Isolation Part I: Taking an In-Depth Look at How to Achieve Isolation in Pulsar provides an introduction to three approaches to implementing isolation in Pulsar:

- Leveraging separate Pulsar clusters that use separate BookKeeper clusters
- Leveraging separate Pulsar clusters that share one BookKeeper cluster
- Using a single Pulsar cluster with a single BookKeeper cluster

Each of these approaches and their specific use cases are discussed at length in the subsequent blogs.

Pulsar Isolation Part II: Separate Pulsar Clusters shows you how to achieve isolation between separate Pulsar clusters that use separate BookKeeper clusters. This shared-nothing approach offers the highest level of isolation and is suitable for storing highly sensitive data, such as personally identifiable information or financial records.

Pulsar Isolation Part III: Separate Pulsar Clusters Sharing a Single BookKeeper Cluster demonstrates how to achieve Pulsar isolation using separate Pulsar clusters that share one BookKeeper cluster. This approach uses separate Pulsar broker clusters to isolate end users from one another and allows you to use different authentication methods based on the use case. As a result, you gain the benefits of a shared storage layer, such as a reduced hardware footprint and lower hardware and maintenance costs.

In this fourth and final blog of the series, we provide a step-by-step tutorial on how to use a single cluster to achieve broker and bookie isolation. This more traditional approach takes advantage of Pulsar's built-in multi-tenancy and removes the need to manage multiple broker and bookie clusters.
Preparation

In this tutorial, we use docker-compose to set up a Pulsar cluster, so we first need to install a Docker environment. This tutorial is based on Docker 20.10.10, docker-compose 1.29.2, and macOS 12.3.1.

Get the docker-compose configuration files.

```
git clone https://github.com/gaoran10/pulsar-docker-compose
cd pulsar-docker-compose
```

Start the cluster.

```
docker-compose up
```

Check the pods.

```
docker-compose ps
Name Command State Ports
--------------------------------------------------------------------------------------------------------
bk1 bash -c export dbStorage_w ... Up
bk2 bash -c export dbStorage_w ... Up
bk3 bash -c export dbStorage_w ... Up
bk4 bash -c export dbStorage_w ... Up
broker1 bash -c bin/apply-config-f ... Up
broker2 bash -c bin/apply-config-f ... Up
broker3 bash -c bin/apply-config-f ... Up
proxy1 bash -c bin/apply-config-f ... Up 0.0.0.0:6650->6650/tcp, 0.0.0.0:8080->8080/tcp
pulsar-init bin/init-cluster.sh Exit 0
zk1 bash -c bin/apply-config-f ... Up
```
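If any container fails to reach the Up state, you can inspect its logs before continuing; this uses standard docker-compose commands (broker1 here is just an example service):

```
docker-compose logs -f broker1
```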
After the cluster initialization completes, we can begin setting the broker isolation policy.
Broker Isolation

Download a Pulsar release package to execute the pulsar-admin command.

```
wget https://archive.apache.org/dist/pulsar/pulsar-2.10.0/apache-pulsar-2.10.0-bin.tar.gz
tar -xvf apache-pulsar-2.10.0-bin.tar.gz
# we can execute the pulsar-admin command from this directory
cd apache-pulsar-2.10.0
```
Get the broker list.
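The cluster in this docker-compose setup is named test, so the brokers can be listed with the pulsar-admin CLI (the same command appears again later in this tutorial):

```
bin/pulsar-admin brokers list test
# expected output: broker1:8080, broker2:8080, broker3:8080
```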
Create a namespace.

```
bin/pulsar-admin namespaces create public/ns-isolation
bin/pulsar-admin namespaces set-retention -s 1G -t 3d public/ns-isolation
```

Set the namespace isolation policy.

```
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*" \
--secondary "broker2:*" \
test ns-broker-isolation
```

Get the namespace isolation policies.

```
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
```
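You can also fetch a single policy by name; a quick sketch, assuming the ns-isolation-policy get subcommand in this Pulsar release:

```
bin/pulsar-admin ns-isolation-policy get test ns-broker-isolation
```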
Create a partitioned topic.

```
bin/pulsar-admin topics create-partitioned-topic -p 10 public/ns-isolation/t1
```

Do a partitioned lookup.

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
```
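Optionally, verify that clients can produce and consume through the proxy before testing failover. A minimal sketch, assuming the default client.conf service URL pulsar://localhost:6650, which docker-compose maps to proxy1:

```
# produce a few test messages through the proxy
bin/pulsar-client produce -m 'hello' -n 10 public/ns-isolation/t1
# read them back, starting from the earliest position
bin/pulsar-client consume -s sub-verify -p Earliest -n 10 public/ns-isolation/t1
```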
Stop broker1.

```
${DOCKER_COMPOSE_HOME}/docker-compose stop broker1
# output
Stopping broker1 ... done
```

Check the partitioned lookup. After broker1 stops, the topics will be owned by the secondary brokers matching broker2:*.

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker2:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker2:6650
```

Stop broker2.

```
${DOCKER_COMPOSE_HOME}/docker-compose stop broker2
# output
Stopping broker2 ... done
```

Check the partitioned lookup. After stopping broker2, there are no available brokers for the namespace public/ns-isolation.

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
HTTP 503 Service Unavailable
Reason: javax.ws.rs.ServiceUnavailableException: HTTP 503 Service Unavailable
```
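Note that broker3 is still running; the isolation policy simply prevents it from taking ownership of this namespace. You can confirm broker3 is alive with the broker list command used earlier:

```
bin/pulsar-admin brokers list test
# broker3:8080 should still be listed
```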
Restart broker1 and broker2.

```
${DOCKER_COMPOSE_HOME}/docker-compose start broker1
# output
Starting broker1 ... done
${DOCKER_COMPOSE_HOME}/docker-compose start broker2
# output
Starting broker2 ... done
```
Migrate the Namespace between Brokers

Because the Pulsar broker is stateless, we can migrate a namespace between broker groups by simply changing the namespace isolation policy.
Check the namespace isolation policies.

```
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
```

We can see that the primary and secondary brokers of the namespace public/ns-isolation are broker1:* and broker2:*.

Check the topic partitioned lookup results.

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
```

Set a new namespace isolation policy that changes the primary to broker3.

```
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker3:*" \
--secondary "broker2:*" \
test ns-broker-isolation
```

Check the namespace isolation policy.

```
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker3:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
```

Unload the namespace to make the namespace isolation policy take effect.

```
bin/pulsar-admin namespaces unload public/ns-isolation
```

Check the partitioned lookup. We can see that the topics are now owned by the primary broker (broker3).

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker3:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker3:6650
```

Scale up and down Brokers

Scale up

Start broker4. First, add the broker4 configuration to the docker-compose file.

```
broker4:
  hostname: broker4
  container_name: broker4
  image: apachepulsar/pulsar:latest
  restart: on-failure
  command: >
    bash -c "bin/apply-config-from-env.py conf/broker.conf && \
             bin/apply-config-from-env.py conf/pulsar_env.sh && \
             bin/watch-znode.py -z $$zookeeperServers -p /initialized-$$clusterName -w && \
             exec bin/pulsar broker"
  environment:
    clusterName: test
    zookeeperServers: zk1:2181
    configurationStore: zk1:2181
    webSocketServiceEnabled: "false"
    functionsWorkerEnabled: "false"
    managedLedgerMaxEntriesPerLedger: 100
    managedLedgerMinLedgerRolloverTimeMinutes: 0
  volumes:
    - ./apply-config-from-env.py:/pulsar/bin/apply-config-from-env.py
  depends_on:
    - zk1
    - pulsar-init
    - bk1
    - bk2
    - bk3
    - bk4
  networks:
    pulsar:
```

Start broker4.

```
${DOCKER_COMPOSE_HOME}/docker-compose create
# output
zk1 is up-to-date
bk1 is up-to-date
bk2 is up-to-date
bk3 is up-to-date
broker1 is up-to-date
broker2 is up-to-date
broker3 is up-to-date
Creating broker4 ... done
proxy1 is up-to-date
${DOCKER_COMPOSE_HOME}/docker-compose start broker4
# output
Starting broker4 ... done
```

Check the broker list.

```
bin/pulsar-admin brokers list test
# output
broker4:8080
broker1:8080
broker2:8080
broker3:8080
```

Update the namespace isolation policy to add broker4 as a primary broker.

```
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*,broker4:*" \
--secondary "broker2:*" \
test ns-broker-isolation
```

Get the namespace isolation policies.

```
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*, broker4:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
```

Unload the namespace.

```
bin/pulsar-admin namespaces unload public/ns-isolation
```

Check the partitioned lookup. The topic partitions should now be spread across broker1 and broker4.

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker4:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
```

Scale down

Remove broker4 from the namespace isolation policy.

```
bin/pulsar-admin ns-isolation-policy set \
--auto-failover-policy-type min_available \
--auto-failover-policy-params min_limit=1,usage_threshold=80 \
--namespaces public/ns-isolation \
--primary "broker1:*" \
--secondary "broker2:*" \
test ns-broker-isolation
```

Check the namespace isolation policy.

```
bin/pulsar-admin ns-isolation-policy list test
# output
ns-broker-isolation NamespaceIsolationDataImpl(namespaces=[public/ns-isolation], primary=[broker1:*], secondary=[broker2:*], autoFailoverPolicy=AutoFailoverPolicyDataImpl(policyType=min_available, parameters={min_limit=1, usage_threshold=80}))
```

Stop broker4.

```
${DOCKER_COMPOSE_HOME}/docker-compose stop broker4
# output
Stopping broker4 ... done
```

Check the broker list.

```
bin/pulsar-admin brokers list test
# output
broker1:8080
broker2:8080
broker3:8080
```

Check the partitioned lookup. With broker4 stopped and removed from the policy, all partitions are owned by broker1, the remaining primary broker.

```
bin/pulsar-admin topics partitioned-lookup public/ns-isolation/t1
# output
persistent://public/ns-isolation/t1-partition-0 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-1 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-2 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-3 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-4 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-5 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-6 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-7 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-8 pulsar://broker1:6650
persistent://public/ns-isolation/t1-partition-9 pulsar://broker1:6650
```

BookKeeper Isolation

Get the bookie list.

```
bin/pulsar-admin bookies list-bookies
# output
{
"bookies" : [ {
"bookieId" : "bk2:3181"
}, {
"bookieId" : "bk4:3181"
}, {
"bookieId" : "bk3:3181"
}, {
"bookieId" : "bk1:3181"
} ]
}
```

Set the bookie racks. The configuration bookkeeperClientRackawarePolicyEnabled defaults to true, so RackawareEnsemblePlacementPolicy is the default bookie placement policy; under this policy, rack names take the form /rack1, /rack2, and so on.

```
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk1:3181 \
--hostname bk1:3181 \
--group group1 \
--rack /rack1
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk3:3181 \
--hostname bk3:3181 \
--group group1 \
--rack /rack1
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk2:3181 \
--hostname bk2:3181 \
--group group2 \
--rack /rack2
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk4:3181 \
--hostname bk4:3181 \
--group group2 \
--rack /rack2
```

Check the bookie racks placement.

```
bin/pulsar-admin bookies racks-placement
group1 {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2 {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181)}
```

Set the bookie affinity group for the namespace. Note that the affinity group only affects ledgers created after it takes effect; existing ledgers stay where they were written.

```
bin/pulsar-admin namespaces set-bookie-affinity-group public/ns-isolation \
--primary-group group1 \
--secondary-group group2
```

Check the namespace affinity group.

```
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
"bookkeeperAffinityGroupPrimary" : "group1",
"bookkeeperAffinityGroupSecondary" : "group2"
}
```

Produce messages to the topic.

```
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
```

Get the internal stats of the topic.

```
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 0,
"ledgerId" : 1,
"ledgerId" : 2,
"ledgerId" : 3,
"ledgerId" : 4,
"ledgerId" : -1,
```

Check the ledger ensembles for the ledgers [0, 1, 2, 3, 4]. (The trailing ledgerId of -1 above is a placeholder, not a real ledger.) Every ensemble should contain only bookies from the primary group, bk1 and bk3.

```
# execute these commands in the node bk1
${DOCKER_COMPOSE_HOME}/docker-compose exec bk1 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 0
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 1
# check ensembles
ensembles={0=[bk3:3181, bk1:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 2
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 3
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 4
# check ensembles
ensembles={0=[bk1:3181, bk3:3181]}
```

Stop bookie1.

```
${DOCKER_COMPOSE_HOME}/docker-compose stop bk1
```

Produce messages to the topic.

```
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
```

Check the ledger metadata.

```
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 5,
"ledgerId" : 6,
"ledgerId" : 7,
"ledgerId" : 8,
"ledgerId" : 9,
"ledgerId" : -1,
```

Check the ledger metadata for the newly added ledgers [5, 6, 7, 8, 9]. Because bookie1 is unavailable and the configuration bookkeeperClientEnforceMinNumRacksPerWriteQuorum is false, bookies from the secondary group are used to fill the ensembles. Bookie3 is in the primary group, so bookie3 is always used.

```
# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 5
# check ensembles
ensembles={0=[bk4:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 6
# check ensembles
ensembles={0=[bk3:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 7
# check ensembles
ensembles={0=[bk2:3181, bk3:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 8
# check ensembles
ensembles={0=[bk3:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 9
# check ensembles
ensembles={0=[bk3:3181, bk2:3181]}
```

Restart bk1.

```
${DOCKER_COMPOSE_HOME}/docker-compose start bk1
```
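To confirm that bk1 has rejoined the cluster, you can list the writable bookies again (the same listbookies command is used later in this tutorial):

```
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
# bk1:3181 should appear in the ReadWrite list again
```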
Migrate Bookie Affinity Group

Check the bookie affinity group.

```
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
"bookkeeperAffinityGroupPrimary" : "group1",
"bookkeeperAffinityGroupSecondary" : "group2"
}
```

Modify the bookie affinity group of the namespace.

```
bin/pulsar-admin namespaces set-bookie-affinity-group public/ns-isolation \
--primary-group group2
```

Check the bookie affinity group.

```
bin/pulsar-admin namespaces get-bookie-affinity-group public/ns-isolation
{
"bookkeeperAffinityGroupPrimary" : "group2"
}
```

Unload the namespace.

```
bin/pulsar-admin namespaces unload public/ns-isolation
```

Produce messages.

```
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
```

Check the ensemble bookies for the newly created ledgers.

```
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 12,
"ledgerId" : 13,
"ledgerId" : 14,
"ledgerId" : 15,
"ledgerId" : 16,
"ledgerId" : -1,
```

Check the ledger metadata for the newly added ledgers [12, 13, 14, 15, 16]. All ensembles should now consist of bookies from group2 (bk2 and bk4), the new primary group.

```
# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 12
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 13
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 14
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 15
# check ensembles
ensembles={0=[bk4:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 16
# check ensembles
ensembles={0=[bk2:3181, bk4:3181]}
```

Scale up and down Bookies

Scale up

Add the following bk5 configuration to the docker-compose file.

```
bk5:
  hostname: bk5
  container_name: bk5
  image: apachepulsar/pulsar:latest
  command: >
    bash -c "export dbStorage_writeCacheMaxSizeMb="${dbStorage_writeCacheMaxSizeMb:-16}" && \
             export dbStorage_readAheadCacheMaxSizeMb="${dbStorage_readAheadCacheMaxSizeMb:-16}" && \
             bin/apply-config-from-env.py conf/bookkeeper.conf && \
             bin/apply-config-from-env.py conf/pulsar_env.sh && \
             bin/watch-znode.py -z $$zkServers -p /initialized-$$clusterName -w && \
             exec bin/pulsar bookie"
  environment:
    clusterName: test
    zkServers: zk1:2181
    numAddWorkerThreads: 8
    useHostNameAsBookieID: "true"
  volumes:
    - ./apply-config-from-env.py:/pulsar/bin/apply-config-from-env.py
  depends_on:
    - zk1
    - pulsar-init
  networks:
    pulsar:
```

Start bookie5.

```
${DOCKER_COMPOSE_HOME}/docker-compose create
${DOCKER_COMPOSE_HOME}/docker-compose start bk5
```

Check the readable and writable bookie list. Because bk1 was restarted earlier and bk5 has just joined, there should be five writable bookies.

```
# execute this command in bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
ReadWrite Bookies :
BookieID:bk2:3181, IP:192.168.32.5, Port:3181, Hostname:bk2
BookieID:bk4:3181, IP:192.168.32.7, Port:3181, Hostname:bk4
BookieID:bk3:3181, IP:192.168.32.6, Port:3181, Hostname:bk3
BookieID:bk1:3181, IP:192.168.32.4, Port:3181, Hostname:bk1
BookieID:bk5:3181, IP:192.168.32.9, Port:3181, Hostname:bk5
```

Add the newly added bookie to the primary group (group2).

```
bin/pulsar-admin bookies set-bookie-rack \
--bookie bk5:3181 \
--hostname bk5:3181 \
--group group2 \
--rack /rack2
```

Check the bookie racks placement.

```
bin/pulsar-admin bookies racks-placement
group1 {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2 {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181), bk5:3181=BookieInfoImpl(rack=/rack2, hostname=bk5:3181)}
```

Unload the namespace.

```
bin/pulsar-admin namespaces unload public/ns-isolation
```

Produce messages to the topic.

```
bin/pulsar-client produce -m 'hello' -n 500 public/ns-isolation/t2
```

Check the newly added ledgers of the topic.

```
bin/pulsar-admin topics stats-internal public/ns-isolation/t2 | grep ledgerId | tail -n 6
"ledgerId" : 17,
"ledgerId" : 20,
"ledgerId" : 21,
"ledgerId" : 22,
"ledgerId" : 23,
"ledgerId" : -1,
```

Verify the ledger ensembles. The newly created ledgers are all written to the primary group (group2), because it now has enough writable bookies.

```
# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 /bin/bash
bin/bookkeeper shell ledgermetadata -ledgerid 17
# check ensembles
ensembles={0=[bk5:3181, bk2:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 20
# check ensembles
ensembles={0=[bk2:3181, bk4:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 21
# check ensembles
ensembles={0=[bk5:3181, bk4:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 22
# check ensembles
ensembles={0=[bk5:3181, bk4:3181]}
bin/bookkeeper shell ledgermetadata -ledgerid 23
# check ensembles
ensembles={0=[bk2:3181, bk4:3181]}
```

Scale down

Check the placement of the racks.

```
bin/pulsar-admin bookies racks-placement
group1 {bk1:3181=BookieInfoImpl(rack=/rack1, hostname=bk1:3181), bk3:3181=BookieInfoImpl(rack=/rack1, hostname=bk3:3181)}
group2 {bk2:3181=BookieInfoImpl(rack=/rack2, hostname=bk2:3181), bk4:3181=BookieInfoImpl(rack=/rack2, hostname=bk4:3181), bk5:3181=BookieInfoImpl(rack=/rack2, hostname=bk5:3181)}
```

Delete the bookie from its affinity group.

```
bin/pulsar-admin bookies delete-bookie-rack -b bk5:3181
```
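To confirm the removal, check the racks placement again; bk5:3181 should no longer appear under group2:

```
bin/pulsar-admin bookies racks-placement
```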
Check whether there are any under-replicated ledgers before taking the bookie down.

```
# execute these commands in the node bk2
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listunderreplicated
```

Stop the bookie.

```
${DOCKER_COMPOSE_HOME}/docker-compose stop bk5
```

Decommission the bookie.

```
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell decommissionbookie -bookieid bk5:3181
```

Check the ledgers on the decommissioned bookie. Once decommissioning completes, this list should be empty because the ledgers have been re-replicated to the remaining bookies.

```
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listledgers -bookieid bk5:3181
```

List the bookies. The decommissioned bk5 should no longer appear.

```
${DOCKER_COMPOSE_HOME}/docker-compose exec bk2 bin/bookkeeper shell listbookies -rw
ReadWrite Bookies :
BookieID:bk2:3181, IP:192.168.48.5, Port:3181, Hostname:bk2
BookieID:bk4:3181, IP:192.168.48.7, Port:3181, Hostname:bk4
BookieID:bk3:3181, IP:192.168.48.6, Port:3181, Hostname:bk3
BookieID:bk1:3181, IP:192.168.48.4, Port:3181, Hostname:bk1
```
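Finally, you can remove the stopped bk5 container; a standard docker-compose cleanup sketch:

```
${DOCKER_COMPOSE_HOME}/docker-compose rm -f bk5
```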
What's Next

Read the previous blogs in this series to learn more about Pulsar isolation:

- Pulsar Isolation Part I: Taking an In-Depth Look at How to Achieve Isolation in Pulsar
- Pulsar Isolation Part II: Separate Pulsar Clusters
- Pulsar Isolation Part III: Separate Pulsar Clusters Sharing a Single BookKeeper Cluster

Learn Pulsar fundamentals with StreamNative Academy: If you are new to Pulsar, we recommend taking the self-paced Pulsar courses developed by the original creators of Pulsar.

Spin up a Pulsar cluster in minutes: Sign up for StreamNative Cloud today. StreamNative Cloud is the simple, fast, and cost-effective way to run Pulsar in the public cloud.

Save your spot at the Pulsar Summit San Francisco: The first in-person Pulsar Summit is taking place this August! Sign up today to join the Pulsar community and the messaging and event streaming community.