Unpacking the Latest Streaming Announcements: A Comprehensive Analysis

Jesse Anderson
Managing Director of Big Data Institute.

It’s conference season, and we’re interpreting the latest announcements from the various streaming vendors. This post will consider the recent StreamNative, Confluent, and WarpStream announcements.

Is It Easy Now?

Confluent has never shied away from saying Kafka is “easy,” and I disagree. During the Kafka Summit London keynote, the speakers said “easy” 17 times; in the Kafka Summit Bangalore keynote, they said it 18 times. It was said 0 times in the Pulsar Summit EMEA keynote hosted by StreamNative.

We’ll get deeper into the individual announcements, which are primarily operational changes. None of the announcements from any of the three companies changes the ease of architecture or development. This nuance is important because Confluent sends a strong message that will lead management and developers to think things are easy now. They aren’t.

It’s All About the Protocol

I’ve been saying for a long time that Kafka’s value is in the protocol, and the protocol will outlive Apache Kafka.

Using Confluent Cloud? You’re using the Kafka protocol to connect to Confluent’s Kora, which, in turn, talks to the Kafka cluster.

Want to use the Kafka protocol with Pulsar? You can use Ursa from StreamNative to treat Pulsar as a Kafka cluster.

Using WarpStream? You’re using the Kafka protocol to connect to WarpStream agents, which write to S3.
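
To make the point concrete, here’s a minimal sketch of protocol-level portability using the confluent-kafka Python client. The endpoint is a placeholder; in practice, pointing the same code at Confluent Cloud, Ursa, or WarpStream agents is mostly a matter of changing this one configuration entry (plus credentials).

```python
# A minimal sketch of what "it's all about the protocol" means in practice:
# the same Kafka client code works against any backend that speaks the
# protocol. The bootstrap endpoint below is a placeholder, not a real one.
from confluent_kafka import Producer

conf = {
    # Swap this one setting to target Confluent Cloud (Kora), StreamNative's
    # Ursa, or WarpStream agents -- the application code stays the same.
    "bootstrap.servers": "your-kafka-compatible-endpoint:9092",
    # Cloud offerings typically also require SASL/TLS credentials here.
}

producer = Producer(conf)
producer.produce("events", key="user-42", value="signed_up")
producer.flush()  # block until the message is acknowledged
```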

We’re going to see increasing competition in this space from others. I think the key will be each vendor’s support of the Kafka protocol. The leaders are Confluent Cloud and StreamNative Cloud, which support the full protocol; WarpStream, for example, doesn’t support transactions.

The other key will be the extra features the new backend gives us. For example, Pulsar’s two-tier architecture is rebalance-free, which removes Kafka’s rebalancing issues. You also get Pulsar’s built-in geo-replication. Confluent mentioned Kafka supporting queuing in version 4.0. You could wait and see how well it works, or have your cake and eat it too with Pulsar’s production-worthy queuing support. IMHO, if something is important enough to use queuing, you’d better be sure that queuing works right.
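
For the curious, here’s a minimal sketch of Pulsar’s queuing semantics using the Python client and a Shared subscription. It assumes a broker running at the default local address; the topic and subscription names are made up for illustration.

```python
# Queuing in Pulsar: with a Shared subscription, messages on one topic are
# distributed across the attached consumers, each message going to exactly
# one of them -- classic work-queue behavior.
import pulsar

client = pulsar.Client("pulsar://localhost:6650")

# Every consumer attached to the same Shared subscription competes for
# messages instead of each receiving a full copy.
consumer = client.subscribe(
    "persistent://public/default/jobs",
    subscription_name="workers",
    consumer_type=pulsar.ConsumerType.Shared,
)

msg = consumer.receive()
print(msg.data())
consumer.acknowledge(msg)  # ack so the message is not redelivered
client.close()
```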

The Keys to Cost

We’ve all heard of the CAP theorem. StreamNative asked us to consider a new CAP theorem: cost, availability, and performance. When building streaming systems, we can pick only two of the three. If we focus on cost, we have to give up availability or performance. I think it’s an interesting way of framing the tradeoffs we deal with in streaming systems.

Pulsar is a more attractive choice in this regard because it allows us to mix different cost profiles on the same cluster. This mixing is made possible by Pulsar’s separation of the broker and storage layers. Pulsar’s storage architecture is standing the test of time.

There isn’t any multi-tenancy in Kafka, and brokers are tightly coupled with storage. Confluent Cloud’s Kora simulates multi-tenancy. You might think of Kora more as a proxy in front of a Kafka cluster than anything else.

An essential piece of the cost equation is the economy of scale for streaming clusters. At Kafka Summit, one speaker mentioned seeing Kafka clusters 2-3x overprovisioned. This matches my experience in the field, except I usually see multiple Kafka clusters across an organization instead of one multi-tenant cluster. The overprovisioning comes from Kafka’s lack of multi-tenancy and resource separation. Since Pulsar supports multi-tenancy, an enterprise could have a single cluster supporting every team’s load, and a single cluster costs less to maintain than multiple clusters.

It’s worth noting that Kora and Ursa are only available on their respective cloud offerings. However, Pulsar has more built-in functionality, and Kora is adding more necessary functionality to Kafka.

Cost Reductions

StreamNative, Confluent, and WarpStream optimize for cost by choosing CA (cost/availability) rather than CP (cost/performance). The performance they trade away comes from using S3 or the cloud provider’s equivalent.

Confluent calls this their Kora Freight cluster. A significant part of a Kafka cluster’s cost arises from the bandwidth costs of replication. With the data stored directly in S3, S3 handles the replication, and S3 and its equivalents don’t charge directly for replication bandwidth. The tradeoff for the low cost is higher latency.
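
As a back-of-envelope illustration (the prices and volumes below are assumptions for the sketch, not quoted rates):

```python
# Back-of-envelope sketch of why offloading replication to S3 cuts cost.
# All numbers are illustrative assumptions -- check your provider's current
# pricing before drawing conclusions.
GB_PER_DAY = 1_000              # assumed ingest volume
REPLICATION_FACTOR = 3          # typical Kafka setting
INTER_AZ_PER_GB = 0.02          # assumed $/GB in + out across AZs

# Classic Kafka: each byte is copied to (RF - 1) follower brokers, usually
# in other availability zones, and that bandwidth is billed.
kafka_replication = GB_PER_DAY * (REPLICATION_FACTOR - 1) * INTER_AZ_PER_GB

# S3-backed designs: the object store replicates internally and doesn't
# bill replication bandwidth, so that line item drops to zero. You pay
# request and storage fees instead, and you accept higher latency.
s3_replication = 0.0

print(f"Kafka inter-AZ replication: ${kafka_replication:,.2f}/day")
print(f"S3-backed replication:      ${s3_replication:,.2f}/day")
```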

WarpStream only operates by saving data to S3, which decreases cost but leaves it at the mercy of S3’s various performance issues, including higher latency. In these scenarios, a lagging consumer can force its agent (“broker”) to read relatively recent data from S3 instead of from a faster local disk.

StreamNative’s Ursa can be configured to use S3. The difference is that you can choose which namespaces are stored directly in S3 (the fundamental breakdown in Pulsar is cluster->tenant->namespace->topic). This cost optimization allows teams to pick, per topic, whether they want CA, CP, or AP (availability/performance).
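
To make the hierarchy concrete, here’s a small illustrative sketch; the tenants, namespaces, and tradeoff assignments are hypothetical:

```python
# A sketch of Pulsar's resource hierarchy (cluster -> tenant -> namespace
# -> topic). Fully qualified Pulsar topic names encode that hierarchy:
#   persistent://<tenant>/<namespace>/<topic>
# The tenants, namespaces, and tradeoff labels below are hypothetical.
namespace_profiles = {
    ("analytics", "clickstream"): "CA",  # cheap and available; S3-backed, slower
    ("internal", "metrics"): "CP",       # cheap and fast; less redundancy
    ("payments", "orders"): "AP",        # available and fast; pay for it
}

for (tenant, namespace), profile in namespace_profiles.items():
    topic = f"persistent://{tenant}/{namespace}/events"
    print(f"{topic} -> optimized for {profile}")
```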

Now Analyze It

One of the difficulties of streaming systems has been landing the data somewhere to be analyzed. We’ve had many different ways of writing a topic’s data out to S3, such as Kafka Connect, Pulsar IO, etc. Each of these methods had tradeoffs, such as how much to write at once or how to read only data that had finished being written. There was a chasm between the pub/sub system and the data lake.

Then Apache Iceberg came along and changed how we write data. Confluent’s and StreamNative’s strategies are Kafka+Flink+Iceberg and Pulsar+Flink+Iceberg/Delta Lake, respectively. Apache Flink does the processing, while Kafka and Pulsar write directly to S3 in Iceberg format. Batch systems such as Apache Spark can then read that data directly; we don’t have to run extra processes just to make the data readable. This change simplifies the operations of reading real-time data, which translates into cost savings.
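
As a sketch of what the batch side looks like, assuming a Spark session already configured with an Iceberg catalog (the catalog, database, and table names here are hypothetical):

```python
# The batch side of the Iceberg story: once the streaming system lands
# topic data in S3 as Iceberg tables, Spark reads them like any other
# table -- no connector jobs to babysit.
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("read-stream-as-iceberg").getOrCreate()

# The topic's data, landed in S3 as Iceberg, is just a table now.
events = spark.read.table("lake.streaming.events")
events.groupBy("event_type").count().show()
```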

While WarpStream writes to S3, the data isn’t written in a format that processes other than its agents can read or use. I contacted WarpStream, and they said to stay tuned for forthcoming announcements.

With Databricks’ acquisition of Tabular, which was started by the creators of Iceberg, and Snowflake’s announcement of the Polaris Catalog, it looks like we’re in for exciting times. If history repeats, we’re in for another proxy fight as the open source community deals with vendors’ competing interests. In these situations, I suggest teams go with the most open option available: the one that supports the most protocols, formats, and technologies. Picking the solution with the most possibilities helps you avoid lock-in.

What Does This Mean to You?

I break these announcements into three categories. If any apply to you, I recommend you take action.

  • Are you discounting other pub/sub systems because they’re not Apache Kafka? It’s about their support of the Kafka protocol, not the system itself.
  • Are you under pressure to be more cost-effective? Explore systems writing directly to S3 or, at minimum, use tiered storage (see the sketch after this list).
  • Are you experiencing the pain of integrating streaming and batch processing? Look at using the built-in Iceberg integrations.
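
For the tiered storage suggestion, here’s a minimal sketch of creating a Kafka topic with tiered storage enabled via the admin API. It assumes a Kafka 3.6+ cluster that already has remote (tiered) storage configured on the brokers; the endpoint and topic name are placeholders.

```python
# Creating a topic with tiered storage enabled so older segments are
# offloaded to object storage while recent data stays on local disk.
from confluent_kafka.admin import AdminClient, NewTopic

admin = AdminClient({"bootstrap.servers": "your-kafka-endpoint:9092"})

topic = NewTopic(
    "clickstream",
    num_partitions=6,
    replication_factor=3,
    config={
        "remote.storage.enable": "true",  # offload older segments
        "local.retention.ms": "3600000",  # keep ~1 hour on local disk
    },
)

# create_topics() returns a dict of topic name -> future.
futures = admin.create_topics([topic])
for name, future in futures.items():
    future.result()  # raises if topic creation failed
```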

Note: This post was sponsored by StreamNative, but StreamNative did not have editorial control.

Jesse Anderson
Jesse Anderson is a Data Engineer, Creative Engineer, and Managing Director of Big Data Institute.
