
Keytop Delivers Enhanced Parking Experience with Apache Pulsar


Key takeaways

  • Keytop has been working to keep its parking services and message architecture up to date to better seize business opportunities in the growing parking management market.
  • As Keytop's business grew, it was looking for a message bus that could meet its demand for high throughput, low latency, high availability, and great scalability.
  • Keytop redesigned its message system by using Apache Pulsar as the backbone of its architecture.
  • Apache Pulsar's geo-replication feature helps Keytop securely store mission-critical data across multiple data centers.

From traditional parking systems to automated parking solutions, manufacturers have come a long way in delivering smarter parking experiences for drivers. Keytop, one of the leading smart parking solution providers, enables easy parking facilities management and helps maximize parking facility revenue for operators.

Background

The parking management market has been expanding amidst an ever-growing number of vehicle owners and parking facilities. Ongoing and upcoming smart city projects have created room for smart parking systems. To seize the huge market opportunity and accommodate projected business growth, Keytop seeks to maintain a robust architecture to facilitate traffic flows and optimize customer experience.

However, this is no easy task. A smart parking system typically consists of various connected parking management solutions deployed on on-street and off-street parking facilities. For example, it may entail vehicle detection sensors, license plate recognition systems, smart payment infrastructure, and middleware that are networked to transmit information to various channels. Data transmission across different components in the system may pose challenges in terms of high availability, high throughput, scalability, and reliability.

In particular, Keytop needs to handle more than 300 GB of messages daily generated from more than 10,000 parking facilities, hundreds of thousands of IoT devices, millions of parking orders, and massive traffic flow records. As such, it also requires a robust monitoring system that is able to provide its operations team with accurate data observability.
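To put the 300 GB/day figure in perspective, a quick back-of-envelope calculation shows the sustained throughput the message bus has to absorb. The 5x peak factor below is an illustrative assumption, not a Keytop figure:

```python
# Back-of-envelope sizing from the figure above: 300 GB of messages per day.
GB = 1024 ** 3
MB = 1024 ** 2

daily_bytes = 300 * GB
seconds_per_day = 24 * 60 * 60

avg_throughput_mb_s = daily_bytes / seconds_per_day / MB
print(f"average throughput: {avg_throughput_mb_s:.2f} MB/s")

# Parking traffic is bursty (rush hours), so the system must be provisioned
# for a peak well above the mean; 5x is an assumed factor for illustration.
peak_throughput_mb_s = avg_throughput_mb_s * 5
print(f"assumed peak throughput: {peak_throughput_mb_s:.1f} MB/s")
```

A steady ~3.6 MB/s average may look modest, but the peak load, millions of small messages, and the monitoring overhead on top of it are what drive the throughput and latency requirements described below.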

Over the past few years, Keytop has been working to keep the message architecture underpinning its parking services up to date.

Legacy message architecture: Pain points

Initially, Keytop built a simple message architecture to support its messaging needs. It contained a single channel hosted on a public cloud platform: messages from parking devices were first transmitted through an MQTT gateway and then relayed to RocketMQ, where business systems consumed them.

Architecture 1.0: Single-Channel

With the company expanding its services to meet rapidly growing demands, it upgraded its existing system by adding an identical channel to the architecture. In this way, parking devices could send messages to both channels with data deduplicated in different business systems. This further increased reliability and improved data consistency.
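The double-channel design depends on downstream consumers recognizing that the same message can arrive once per channel. A minimal sketch of that deduplication logic, with an assumed message format and ID scheme (not Keytop's actual implementation):

```python
# Minimal sketch of the double-channel idea: every message carries a unique
# ID, is published to both channels, and consumers drop the duplicate copy.
# The message fields and ID scheme here are illustrative assumptions.

class DedupConsumer:
    """Keeps a set of seen message IDs and processes each message only once."""

    def __init__(self):
        self._seen = set()
        self.processed = []

    def on_message(self, msg: dict) -> bool:
        msg_id = msg["id"]
        if msg_id in self._seen:
            return False          # duplicate from the other channel; drop it
        self._seen.add(msg_id)
        self.processed.append(msg)
        return True

# A parking event published to both channels arrives twice downstream.
event = {"id": "lot-42/entry/1001", "plate": "ABC123", "action": "enter"}
consumer = DedupConsumer()
channel_a_delivered = consumer.on_message(event)   # first copy: processed
channel_b_delivered = consumer.on_message(event)   # second copy: dropped
```

In production such a seen-set would need a bounded retention window (for example, keyed by time or sequence number) so it does not grow without limit.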

Architecture 2.0: Double-Channel

Although the double-channel design helped ease throughput pressure and increase reliability to some extent, Keytop still struggled to find a solution that was able to keep up with its rapid business growth. Specifically, the smart parking system manufacturer was faced with the following challenges.

  • Performance. “As the number of parking lots we have been serving tops 10,000 globally, our message system hosted on the cloud is not sufficient anymore,” said Song Li, Developer Lead at Keytop’s Chengdu Research and Development center. “We need a new solution that can meet our demand for high throughput and low latency.”
  • Reliability. The cloud platform that Keytop had been using offered a 99.99% uptime commitment, which still allows roughly 50 minutes of possible disruption per year. This was unacceptable, as any outage could cause heavy traffic at parking facilities and hurt customer satisfaction.
  • Costs. To solve the performance issue, Keytop tried upgrading its cloud services and configurations, but the resulting costs grew rapidly as it scaled up.

Why Apache Pulsar

With these pain points in mind, the team began evaluating technologies that could help solve them. Specifically, it was looking for a message bus that could meet its demand for high throughput, low latency, high availability, scalability, multi-tenancy, and easy operations.

“We have carried out different tests on RocketMQ, Kafka, and Pulsar,” Song noted. To gain a comprehensive understanding of the available options, Song led the team in comparing these tools on fundamental features, reliability, and their key advantages and disadvantages. Based on this research, Pulsar, a cloud-native distributed messaging and streaming platform, stood out as Keytop’s final choice, thanks to the following benefits and key differentiators:

  • Computing and storage separation
  • High throughput, low latency, high availability, and great scalability
  • Multi-tenant isolation
  • Excellent write and consumption performance
  • Great stability and reliability, even with millions of topics and partitions
  • Support for dead letter queues and message TTL

“At Keytop, we have different business lines with tons of data, which necessitates data isolation,” Song explained. “The multi-tenant design of Pulsar is just what we need. Besides, it provides Pulsar Manager, a visualized tool to manage Pulsar resources, which guarantees a friendly experience for our operations and maintenance team.”
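Pulsar scopes every topic under a tenant and a namespace (`persistent://tenant/namespace/topic`), which maps naturally onto separate business lines. A small helper sketching how such names could be composed; the tenant and namespace layout below is hypothetical, not Keytop's actual scheme:

```python
def topic_name(tenant: str, namespace: str, topic: str,
               persistent: bool = True) -> str:
    """Compose a fully qualified Pulsar topic name.

    Pulsar isolates tenants from one another, and namespaces group related
    topics within a tenant, so each business line can get its own
    tenant/namespace pair with separate policies and quotas.
    """
    domain = "persistent" if persistent else "non-persistent"
    return f"{domain}://{tenant}/{namespace}/{topic}"

# Hypothetical layout: one tenant per business line, one namespace per region.
entry_events = topic_name("parking", "cn-east", "entry-events")
payments = topic_name("payments", "cn-east", "orders")
```

Because isolation is enforced at the tenant and namespace level, one business line's traffic spikes or policy changes do not affect another's.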

Migrating to Pulsar

In 2020, Song and his team redesigned Keytop’s message system by using Pulsar as the backbone of its architecture.

At first, the team left the double-channel design unchanged while adopting an active-active architecture, in which the MQTT gateway and RocketMQ were replaced by EMQ and Pulsar, respectively. As before, the business systems needed to deduplicate data when receiving messages.

Architecture 3.0: Active-Active

Here is a closer look at the implementation and architecture:

  • In the upstream flow, parking lots reported data to both channels, where EMQ brokers routed the data to various Pulsar topics. The MQ3 service (Keytop’s internal application) then performed the deduplication, making sure no redundant data were stored.
  • In the downstream flow, business systems submitted message requests to both channels. In the middle, Keytop designed its own sink services that distributed data to topics. Ultimately, the data were deduplicated on the parking lot side.
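Conceptually, the downstream sink services route each business message to a topic derived from its destination parking lot, with duplicates dropped on the receiving side. A simplified sketch of that routing; the topic layout and field names are assumptions, not Keytop's actual design:

```python
# Simplified sketch of a downstream sink: each business message is routed to
# a topic derived from its destination parking lot, and the lot-side receiver
# deduplicates, mirroring the upstream dedup logic. Names are assumptions.

def route(message: dict) -> str:
    """Map a downstream business message to its per-lot topic."""
    return f"persistent://parking/downlink/lot-{message['lot_id']}"

messages = [
    {"lot_id": 42, "cmd": "open_gate"},
    {"lot_id": 42, "cmd": "open_gate"},   # duplicate from the second channel
    {"lot_id": 7,  "cmd": "update_rate"},
]

# Group by topic, dropping exact duplicates on the receiving (lot) side.
delivered = {}
for msg in messages:
    topic = route(msg)
    entry = tuple(sorted(msg.items()))
    bucket = delivered.setdefault(topic, [])
    if entry not in bucket:
        bucket.append(entry)
```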

While this new architecture reliably sustained the business, its design and implementation were starting to reach their limits. Because Keytop’s expanding services were all deployed on a single cloud, reliability became the leading concern. “Our business is expanding and we can’t afford any network failures. Unfortunately, some cloud services are not always stable,” said Song.

To solve this problem, the team optimized its message architecture again and came up with a cross-cloud approach. Here is what the architecture looks like after the optimization:

Architecture 4.0: Cross-Cloud

The key benefit of the cross-cloud design is that messages in both channels are fully duplicated. Pulsar’s geo-replication mechanism enables the replication of persistently stored data across multiple data centers. In the event of a disruption of one cloud platform, the messages in the other channel still stay intact, minimizing the business impact for the company.
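In Pulsar, geo-replication is configured per namespace: the tenant is allowed on both clusters, and the namespace's replication clusters are set. A hedged configuration sketch with hypothetical cluster, tenant, and namespace names:

```shell
# Hypothetical names (cloud-a, cloud-b, parking/events); adjust to the
# actual deployment. Assumes both clusters already know about each other.

# 1. Allow the tenant on both clusters (one per cloud provider).
pulsar-admin tenants create parking \
  --allowed-clusters cloud-a,cloud-b

# 2. Replicate the namespace across both clusters: persistent messages
#    published on either side are asynchronously copied to the other.
pulsar-admin namespaces set-clusters parking/events \
  --clusters cloud-a,cloud-b
```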

Looking ahead: Further extending the potential of Pulsar

When synchronizing data with its data lake, Song’s team is still faced with some problems in their data pipeline.

“We are using Canal servers to send binlog messages to Kafka. And with Flink, we can send our data to the data lake. However, there are still some performance issues due to data rebalancing in Kafka,” Song reported. “This is also the reason why we are considering replacing it with Pulsar. Unlike other messaging and streaming platforms, Pulsar boasts a separate architecture for computing and storage. Pulsar brokers are stateless and you can easily scale out your cluster.”

As Song’s team continues to integrate Pulsar into their business system, they have decided to look at Pulsar’s natural compatibility with Kubernetes, especially Pulsar’s scalability as a distributed cloud-native platform.

In most parking lots, operators may expect lower throughput during the night, which means resources can be reallocated for better utilization. “To optimize resource allocation, we are also planning to leverage the autoscaling capability of Kubernetes,” said Song. “We have some big data tasks to run. During the night, our Pulsar cluster deployed on Kubernetes can be scaled to make room for the big data tasks, which will greatly improve our efficiency.”
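Because Pulsar brokers are stateless, scaling them on Kubernetes can be as simple as attaching a Horizontal Pod Autoscaler to the broker workload. A sketch under assumed names (the workload name and namespace below are hypothetical, not Keytop's deployment):

```shell
# Hypothetical workload/namespace names. Scale broker pods between 2 and 8
# replicas based on CPU usage; at night, as load drops, the HPA scales the
# brokers down, freeing cluster capacity for big data tasks.
kubectl autoscale statefulset pulsar-broker \
  --namespace pulsar \
  --min=2 --max=8 --cpu-percent=60
```

BookKeeper nodes hold the persisted data and scale differently, so in practice only the stateless broker layer is elastic in this simple way.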

More on Apache Pulsar

Pulsar has become one of the most active Apache projects over the past few years, with a vibrant community that continues to drive innovation and improvements to the project.

© StreamNative, Inc. 2022. Apache, Apache Pulsar, Apache BookKeeper, Apache Flink, and associated open source project names are trademarks of the Apache Software Foundation.