July 30, 2025
8 min read

Pulsar Newbie Guide for Kafka Engineers (Part 1): Kafka → Pulsar CLI Cheatsheet

Neng Lu
Director of Platform, StreamNative
Penghui Li
Director of Streaming, StreamNative & Apache Pulsar PMC Member
Hang Chen
Director of Storage, StreamNative & Apache Pulsar PMC Member

TL;DR

This post provides a quick cheatsheet mapping common Kafka CLI commands to Apache Pulsar. We’ll show how to create topics, list topics, produce and consume messages, and check metadata using Pulsar’s CLI tools. For each Kafka command, you’ll see the Pulsar equivalent (using pulsar-admin, pulsar-client, etc.) so you can hit the ground running with Pulsar. Bottom line: Pulsar’s CLI is just as powerful as Kafka’s, with a single unified tool for administration and simple clients for testing.

Introduction

If you’re familiar with Kafka’s command-line tools (like kafka-topics.sh, kafka-console-producer.sh, kafka-console-consumer.sh), you’ll be glad to know Pulsar offers similar capabilities. Pulsar’s main CLI tool is pulsar-admin, which lets you manage topics, tenants, namespaces, subscriptions and more. There’s also pulsar-client for producing/consuming messages and pulsar-perf for performance testing. This section will translate your Kafka CLI know-how to Pulsar commands.

Environment Assumption: We assume you have a Pulsar installation (or a standalone instance) available. In the examples we run bin/pulsar-admin and bin/pulsar-client from the Pulsar install directory; adjust the paths (or add the bin directory to your PATH) for your setup.

Equivalents of Common Kafka Commands

Let’s go through typical tasks:

  • Listing Topics: In Kafka, you might run kafka-topics.sh --bootstrap-server localhost:9092 --list. In Pulsar, topics are scoped by namespace (more on that in Part 2). To list all topics in a namespace, use pulsar-admin topics list. For example, to list topics in the public/default namespace (the default namespace in a standalone cluster):

bin/pulsar-admin topics list public/default

This will output all topics under public/default. You can also list all namespaces in a tenant (bin/pulsar-admin namespaces list <tenant>) or all tenants (bin/pulsar-admin tenants list). Kafka’s --list only covers topics and leaves multi-tenant setups to separate tooling, whereas Pulsar handles tenants and namespaces natively.
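
For example, on a standalone cluster you could list all tenants and then the namespaces under the default public tenant:

bin/pulsar-admin tenants list
bin/pulsar-admin namespaces list public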

  • Creating a Topic: Kafka’s kafka-topics.sh --create ... lets you create a topic (and specify partitions). Pulsar can auto-create topics when producers send data, but you can also explicitly create them. Use pulsar-admin topics create. Pulsar topics have a persistent or non-persistent prefix. By default, use persistent topics. For example:

bin/pulsar-admin topics create "persistent://public/default/my-topic"

This creates a topic named “my-topic” in the public/default namespace. If you wanted a non-persistent topic (rare for newcomers; non-persistent means messages are not durably stored), you’d specify the non-persistent:// prefix. If the namespace or tenant doesn’t exist, Pulsar will throw an error (create the tenant/namespace first; see Part 2). A topic created this way is non-partitioned (a single partition in Kafka terms); to create a partitioned topic, use the create-partitioned-topic command described next.

  • Creating a Partitioned Topic: In Kafka, --partitions on kafka-topics.sh --create sets the partition count (and --alter can increase it later). In Pulsar, a partitioned topic is a higher-level construct consisting of multiple internal topic partitions. To create one, use create-partitioned-topic with the -p/--partitions flag:

bin/pulsar-admin topics create-partitioned-topic \
  --partitions 4 \
  persistent://public/default/my-partitioned-topic

This will create 4 internal partitions (my-partitioned-topic-partition-0 to -3). In Pulsar’s view this is one logical topic with 4 partitions (comparable to a Kafka topic with 4 partitions).
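
If you want to double-check the result, one way is to ask pulsar-admin for the partitioned-topic metadata (a quick sanity check, assuming the topic above was created):

bin/pulsar-admin topics get-partitioned-topic-metadata persistent://public/default/my-partitioned-topic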

  • Producing Messages (Console Producer): Kafka’s kafka-console-producer.sh allows sending test messages from the terminal. Pulsar provides pulsar-client for a similar purpose. For example, to send a message "Hello Pulsar":

bin/pulsar-client produce persistent://public/default/my-topic --messages "Hello Pulsar"

This will produce a message to my-topic. The CLI will print confirmation of send success. You can send multiple messages by separating with commas or using --messages multiple times.
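
As an illustration of sending multiple messages in one invocation (the message bodies below are arbitrary), the following sends three comma-separated messages and repeats the batch twice with --num-produce:

bin/pulsar-client produce persistent://public/default/my-topic \
    --messages "msg-1,msg-2,msg-3" --num-produce 2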

  • Consuming Messages (Console Consumer): Kafka’s console consumer reads messages from a topic. Pulsar’s equivalent is also via pulsar-client. To consume messages:

bin/pulsar-client consume persistent://public/default/my-topic -s "my-subscription" -n 0 -p Earliest

This command will subscribe to my-topic with subscription name "my-subscription" and print messages to the console. The -n 0 means consume indefinitely (or you can specify a number of messages to consume). Pulsar requires a subscription name for consumers – think of it like a consumer group ID in Kafka (see Part 4 for details on subscriptions). If the subscription doesn’t exist, it will be created on the fly. You’ll start seeing any messages (including the ones produced above) printed out. Each message will be acknowledged as it’s consumed by default.
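
For a quick smoke test you can instead read a fixed number of messages and exit; for example (the subscription name below is arbitrary):

bin/pulsar-client consume persistent://public/default/my-topic -s "smoke-test" -n 10 -p Earliest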

  • Viewing Topic Details and Stats: Kafka has tools like kafka-topics.sh --describe and kafka-consumer-groups.sh to show topic configs or consumer group offsets. Pulsar consolidates much of this in pulsar-admin commands. For example, to get topic statistics:

bin/pulsar-admin topics stats persistent://public/default/my-topic

This will output stats including the number of messages published, number of subscriptions, backlog (unacknowledged messages count per subscription), and other metrics. It’s similar to Kafka’s topic description and consumer lag combined – you can see who’s connected, backlog (like consumer lag), etc.

To see internal stats, including storage details, use topics stats-internal. For partitioned topics, use topics partitioned-stats <topic> to get aggregated stats across all partitions.
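
Concretely, against the topics created earlier in this post:

bin/pulsar-admin topics stats-internal persistent://public/default/my-topic
bin/pulsar-admin topics partitioned-stats persistent://public/default/my-partitioned-topic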

  • Managing Subscriptions: Kafka uses consumer group commands to manage offsets (like resetting to earliest). Pulsar allows resetting a subscription cursor to a specific message or time. For example, to rewind a subscription to the earliest message:

bin/pulsar-admin topics reset-cursor persistent://public/default/my-topic \
    --subscription my-subscription --time 0

This moves the subscription cursor to the very beginning (timestamp 0 as a special value) so the consumer can replay from the start. You can also use a message ID (--messageId) if you have a specific ledger:entry position. This is roughly analogous to Kafka’s --reset-offsets tool but more granular.
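
For instance, to jump the cursor to a specific position instead of a timestamp (the ledger:entry value below is purely illustrative; use one from your own topic):

bin/pulsar-admin topics reset-cursor persistent://public/default/my-topic \
    --subscription my-subscription --messageId 1234:0   # replace 1234:0 with a real ledgerId:entryId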

  • Deleting Topics: Kafka’s kafka-topics.sh --delete marks a topic for deletion (if enabled). In Pulsar, you can delete topics with:

bin/pulsar-admin topics delete persistent://public/default/my-topic

However, Pulsar won’t delete a topic with active producers or consumers by default. You can force delete with -f (and -d to also delete the schema) if needed. Deleting a partitioned topic requires the delete-partitioned-topic command (which removes all partitions).
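
For example (use with care; these assume the topics created earlier in this post):

bin/pulsar-admin topics delete -f persistent://public/default/my-topic
bin/pulsar-admin topics delete-partitioned-topic persistent://public/default/my-partitioned-topic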

  • Other Handy Commands: A few more worth noting:
    • List Tenants: bin/pulsar-admin tenants list – shows all tenants (Kafka has no direct analog since it’s not multi-tenant by default).
    • List Namespaces: bin/pulsar-admin namespaces list <tenant> – shows namespaces in a tenant (akin to logical group of topics; again, Kafka doesn’t have this concept natively).
    • Examine Messages: bin/pulsar-admin topics peek-messages -s <sub> -n 1 <topic> lets you peek at a subscription’s unacknowledged message(s) without consuming/acking. This is useful to inspect queued messages for a subscription (Kafka doesn’t have an exact equivalent, since unconsumed messages are just those at an offset the consumer hasn't reached).
    • Skip Messages: bin/pulsar-admin topics skip <args> can skip messages on a subscription (acknowledging them without consuming) – helpful for clearing a backlog without reading everything. Concrete peek and skip examples follow this list.
    • Shell Completion and Help: pulsar-admin supports --help on any subcommand, and you can use tab completion if configured. The CLI is well-documented, and you can refer to the official docs for all flags.
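
To make the peek and skip items concrete, here is one way to invoke each against the topic and subscription used earlier (the message counts are illustrative):

bin/pulsar-admin topics peek-messages -s my-subscription -n 1 persistent://public/default/my-topic
bin/pulsar-admin topics skip -s my-subscription -n 10 persistent://public/default/my-topic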

Example Workflow

Let’s tie it together with a quick example scenario:

  1. Create a Topic: Suppose you want a topic for an application, just as you would create one in Kafka. Run pulsar-admin topics create persistent://public/default/app-events.
  2. Produce some messages: Use pulsar-client to send a few test messages:

bin/pulsar-client produce persistent://public/default/app-events \
    --messages "event1","event2","event3"

     Each comma-separated string is sent as a separate message.
  3. Consume the messages: In a separate shell, start a consumer:

bin/pulsar-client consume persistent://public/default/app-events -s tester -n 3 -p Earliest

     This will receive 3 messages on the tester subscription and then exit. You should see “event1”, “event2”, and “event3” in the output.
  4. Check stats: Now run pulsar-admin topics stats persistent://public/default/app-events. It should show no backlog for the "tester" subscription (since we consumed and acknowledged all messages), along with the total messages published, throughput, and other metrics.
  5. Experiment with reset: If you run the consumer again with the same subscription, you won’t get any messages (they were already acknowledged). But you can reset the subscription to earliest:

bin/pulsar-admin topics reset-cursor persistent://public/default/app-events \
    --subscription tester --time 0

     Now running the consumer again will re-read the messages from the start – very useful for replays, similar to a Kafka consumer group seeking back to the beginning.

This workflow shows how Pulsar’s CLI can accomplish what Kafka engineers expect, often with even more flexibility (for example, the ability to peek or reset cursors easily).

Key Takeaways

  • Pulsar’s main admin tool is pulsar-admin, which combines the functionality of multiple Kafka scripts (topics, configs, consumer groups) into one CLI. Use pulsar-admin [resource] [operation] format (e.g., topics list, namespaces create, brokers list) to manage the cluster.
  • Pulsar topics are referred to by a full name including tenant and namespace (e.g., persistent://tenant/namespace/topic). Ensure you include the full name in CLI commands to avoid confusion. The default tenant/namespace in a new standalone cluster is public/default.
  • Producing and consuming test messages is easy with pulsar-client (no coding required). This parallels Kafka’s console producer/consumer tools and is great for smoke testing topics.
  • Many Kafka concepts (like consumer group offset resets, topic inspections, etc.) are available via Pulsar CLI: you can reset subscriptions, skip messages, and even peek at messages without consuming – features that can simplify troubleshooting.
  • Pulsar’s CLI embraces Pulsar’s multi-tenancy and segmentation. You’ll routinely use tenant and namespace in commands (unlike Kafka). This guides us into the next post, where we’ll dive into Tenants, Namespaces & Bundles – the foundations of Pulsar’s multi-tenant architecture.
Neng Lu
Neng Lu is currently the Director of Platform at StreamNative, where he leads the engineering team in developing the StreamNative ONE Platform and the next-generation Ursa engine. As an Apache Pulsar Committer, he specializes in advancing Pulsar Functions and Pulsar IO Connectors, contributing to the evolution of real-time data streaming technologies. Prior to joining StreamNative, Neng was a Senior Software Engineer at Twitter, where he focused on the Heron project, a cutting-edge real-time computing framework. He holds a Master's degree in Computer Science from the University of California, Los Angeles (UCLA) and a Bachelor's degree from Zhejiang University.
Penghui Li
Penghui Li is passionate about helping organizations to architect and implement messaging services. Prior to StreamNative, Penghui was a Software Engineer at Zhaopin.com, where he was the leading Pulsar advocate and helped the company adopt and implement the technology. He is an Apache Pulsar Committer and PMC member.
Hang Chen
Hang Chen, an Apache Pulsar and BookKeeper PMC member, is Director of Storage at StreamNative, where he leads the design of next-generation storage architectures and Lakehouse integrations. His work delivers scalable, high-performance infrastructure powering modern cloud-native event streaming platforms.
