The Apache Pulsar community is pleased to release version 2.8.1! 49 contributors delivered 213 commits of improvements and bug fixes.
This blog walks through the most noteworthy changes in this release, grouped by component. For the complete list of features, enhancements, and bug fixes, check out the Pulsar 2.8.1 Release Notes.
Issue: Previously, precise publish rate limits configured on topics did not take effect.
Resolution: Implemented a new RateLimiter using the …
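The release notes do not spell out the new implementation's details, but the general idea of precise rate limiting can be illustrated with a minimal token-bucket limiter driven by a monotonic clock. The class and method names below are hypothetical, not Pulsar's actual RateLimiter API:

```python
import time

class PreciseRateLimiter:
    """Minimal token-bucket limiter (hypothetical sketch, not Pulsar's API)."""

    def __init__(self, permits_per_second, clock=time.monotonic):
        self.rate = permits_per_second
        self.clock = clock                      # monotonic clock avoids wall-clock jumps
        self.tokens = float(permits_per_second)
        self.last_refill = clock()

    def try_acquire(self, permits=1):
        now = self.clock()
        # Refill proportionally to elapsed time, capped at one second's quota.
        self.tokens = min(self.rate,
                          self.tokens + (now - self.last_refill) * self.rate)
        self.last_refill = now
        if self.tokens >= permits:
            self.tokens -= permits
            return True
        return False
```

Because refill is computed from elapsed monotonic time rather than fixed-interval resets, the permitted rate stays precise even under bursty call patterns.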
Issue: Messages with the same key could be delivered out of order when message redelivery occurred on a Key-Shared subscription.
Resolution: When sending a message to messagesToRedeliver, the broker saved the hash value of the key. If the dispatcher attempted to send newer messages whose key matched any of the saved hash values, those messages were added to messagesToRedeliver instead of being sent. This prevented messages with the same key from being delivered out of order.
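The hold-back logic described above can be sketched conceptually: any message whose key hashes into the set of pending-redelivery hashes is held back rather than dispatched. The function and field names here are illustrative, not the broker's actual code:

```python
def dispatch(messages, redeliver_hashes, hash_key):
    """Hold back messages whose key hash is pending redelivery (conceptual sketch).

    messages         -- list of dicts with a "key" field (hypothetical shape)
    redeliver_hashes -- set of key hashes currently in messagesToRedeliver
    hash_key         -- the hash function applied to message keys
    """
    to_send, held_back = [], []
    for msg in messages:
        if hash_key(msg["key"]) in redeliver_hashes:
            held_back.append(msg)   # goes back to messagesToRedeliver
        else:
            to_send.append(msg)
    return to_send, held_back
```

Holding back newer messages with a matching key hash guarantees the redelivered (older) messages reach the consumer first.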
Issue: Previously, when there were producers with the same name, an error would be triggered and the old producer would be removed even though it was still writing to a topic.
Resolution: Validated producers based on a connection ID (local & remote addresses and unique ID) and a producer ID within that connection rather than a producer name.
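The identity check described above can be sketched as a comparison of tuples: two producer sessions are the same only if both the connection ID and the per-connection producer ID match. The dict fields are hypothetical stand-ins for the broker's internal state:

```python
def is_same_producer(existing, incoming):
    """Sketch: identify a producer by its connection (addresses + unique ID)
    and its producer ID within that connection, never by its name alone."""
    return (existing["connection_id"] == incoming["connection_id"]
            and existing["producer_id"] == incoming["producer_id"])
```

With this rule, a second producer that merely reuses a name no longer evicts an existing producer that is still actively writing on another connection.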
Issue: Previously, when a producer continued to reconnect to a broker, the fenced state of the topic was always set to true, which caused the topic to be unable to recover.
Resolution: Added an entry to ManagedLedgerException when the polled operation is not equal to the current operation.
Issue: Previously, when subscribing to a topic at the earliest position, data could be lost because ManagedLedger used a wrong position to initialize the cursor.
Resolution: Added a test to check a cursor's position when subscribing to a topic with the earliest position.
Issue: Previously, a deadlock might occur when messages were added to an incoming queue. The deadlock could happen in two scenarios: first, if the message was added to the queue before it was read; second, if readNextAsync completed before future.whenComplete was called.
Resolution: Used an internal thread to process the callback of readNextAsync.
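The pattern behind this fix is to route completion callbacks through a dedicated internal thread instead of running them inline, so a callback can never re-enter a lock that the completing thread still holds. This is a hedged conceptual sketch in Python, not Pulsar's actual implementation:

```python
from concurrent.futures import ThreadPoolExecutor, Future

# Hypothetical sketch: a single dedicated thread processes all completion
# callbacks, so they never execute re-entrantly on the completer's thread.
callback_executor = ThreadPoolExecutor(max_workers=1)

def when_complete(future, callback):
    # Instead of invoking the callback inline (which can deadlock if the
    # completer holds a lock the callback also needs), hand it off to the
    # internal callback thread.
    future.add_done_callback(
        lambda f: callback_executor.submit(callback, f.result()))
```

Decoupling completion from callback execution is a standard way to break the lock-ordering cycle described in the issue.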
Issue: Previously, the broker ran out of memory when calling the …
Resolution: Added the missing entry.release() call.
Issue: Previously, when a topic had only non-durable subscriptions, compaction was not triggered because the estimated backlog size was 0.
Resolution: Changed the behavior so that, when a topic has no durable subscriptions, the total backlog size is used to trigger compaction.
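The changed trigger condition can be sketched as follows. The function name and parameters are hypothetical; they only illustrate which backlog estimate feeds the threshold check before and after the fix:

```python
def should_trigger_compaction(estimated_backlog, total_backlog,
                              threshold, has_durable_subs):
    """Conceptual sketch of the compaction trigger (hypothetical names).

    Before the fix, a topic with only non-durable subscriptions reported an
    estimated backlog of 0 and therefore never crossed the threshold.
    """
    backlog = estimated_backlog if has_durable_subs else total_backlog
    return backlog >= threshold
```

Falling back to the total backlog size ensures compaction still runs for topics whose only consumers are non-durable.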
Issue: Repeatedly opening and closing consumers with a Key-Shared subscription might occasionally stop dispatching messages to all consumers.
Resolution: Moved the mark-delete position and removed the consumer from the selector before calling …
Issue: Previously, the requested ledger was not checked to verify that it belonged to the consumer's connected topic, which allowed a consumer to read data that did not belong to that topic.
Resolution: Added a check at the ManagedLedger level before executing read operations.
Issue: Previously, the retention policy did not work because it was not set in the managedLedger configuration.
Resolution: Set the retention policy in the managedLedger configuration in the onUpdate listener method.
Issue: Previously, data might be lost if there were no durable subscriptions on topics.
Resolution: Leveraged the topic compaction cursor to retain data.
Issue: Previously, there was a memory leak of outgoing TCP connections in the Pulsar proxy, because the ProxyConnectionPool instances were created outside the PulsarClientImpl instance and were not closed when the client was closed.
Resolution: Shut down the ProxyConnectionPool instances when the client is closed.
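The underlying ownership principle is simple: resources created on behalf of a client should be owned by the client and released with it. This hedged sketch uses hypothetical names, not the actual PulsarClientImpl API:

```python
class Client:
    """Sketch of the leak fix: the client owns its connection pools and
    shuts them down on close (hypothetical structure, not Pulsar's API)."""

    def __init__(self):
        self._pools = []

    def create_pool(self):
        pool = {"closed": False}
        self._pools.append(pool)   # owned by the client, not created outside it
        return pool

    def close(self):
        for pool in self._pools:
            pool["closed"] = True  # every pool is released with the client
```

When pools are created outside the client, as in the bug, nothing tears them down at client shutdown and each forgotten pool leaks its TCP connections.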
Issue: Previously, the exception "GeneratedMessageV3 is not assignable" was thrown when using a Protobuf schema.
Resolution: Added the relevant dependencies to the Pulsar instance.
Issue: Previously, partitioned-topic consumers did not clean up the resources when failing to create consumers. If this failure occurred with non-recoverable errors, it triggered a memory leak, which made applications unstable.
Resolution: Closed and cleaned timer task references.
Issue: Previously, there was a race condition between 2 threads when one of the individual consumers was in a "paused" state and the shared queue was full.
Resolution: Validated the state of the shared queue after marking the consumer as "paused". The consumer is not blocked if the other thread has emptied the queue in the meantime.
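The fix described above follows a common pattern for check-then-act races: after setting the "paused" flag, re-validate the shared state, because another thread may have drained the queue in between. A hedged sketch with hypothetical names:

```python
import threading
from collections import deque

class SharedQueueConsumer:
    """Sketch of the race fix (hypothetical structure, not the client's code)."""

    def __init__(self, capacity):
        self.capacity = capacity
        self.queue = deque()
        self.lock = threading.Lock()
        self.paused = False

    def maybe_pause(self):
        # First pass: mark ourselves paused if the shared queue looks full.
        with self.lock:
            if len(self.queue) >= self.capacity:
                self.paused = True
        # Re-validate: another thread may have emptied the queue between our
        # observation and now; without this check we would stay paused forever.
        with self.lock:
            if self.paused and len(self.queue) < self.capacity:
                self.paused = False
        return self.paused
```

The second check is what the resolution adds: the consumer is not left blocked if the other thread has emptied the queue in the meantime.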
Issue: Previously, consumers were blocked when Consumer.batchReceive() was called concurrently by different threads, due to a race condition in ConsumerBase.
Resolution: Changed ConsumerBase to allow the batch timer and MultiTopicsConsumerImpl to submit work in a single thread.
Issue: Previously, deadlock might happen when custom logging was enabled in the Python client.
Resolution: Detached the worker thread and reduced the log level.
If you are interested in learning more about Pulsar 2.8.1, you can download and try it out now!
The first-ever Pulsar Virtual Summit Europe 2021 will take place in October. Register now and help us make it an even bigger success by spreading the word on social media!
To get started, you can download Pulsar directly or you can spin up a Pulsar cluster with a free 30-day trial of StreamNative Cloud! We also offer technical consulting and expert training to help get your organization started. As always, we are highly responsive to your feedback. Feel free to contact us if you have any questions at any time. We look forward to hearing from you and stay tuned for the next Pulsar release!