We are excited to invite you to ApacheCon Asia 2022 to explore the newest tools and tips and connect with subject-matter experts in a range of Apache Pulsar-related sessions. The Apache Software Foundation will hold ApacheCon Asia 2022 online from July 29th to July 31st, 2022. Register now for free to join us for this inspiring three-day event exploring cutting-edge technologies.
The conference gathers adopters, developers, engineers, and technologists from some of the most influential open source communities in the world. To date, more than 200 proposals have been submitted by presenters from Intel, Huawei, Tencent Cloud, StreamNative, Sina Weibo, vivo, and many more. Nearly 50 of these sessions are related to streaming and messaging, 30 of which focus on Apache Pulsar-based technologies.
Let’s take a quick look at some of the featured sessions in messaging and streaming, ranging from technical deep dives and best practices to tutorials and insights.
FLiPN Awesome Streaming with Open Source (English)
Timothy Spann, Developer Advocate, StreamNative
In this talk, Tim will walk through how to build different types of streaming applications using Apache NiFi, Apache Flink, Apache Spark, and Apache Pulsar together. The session will demonstrate how to ingest data from various sources and REST feeds, enrich it, and send it to Apache Pulsar. Applications will then be built on top of the live streaming data with WebSocket dashboards, Apache Spark SQL ETL, and Apache Flink continuous SQL.
Introducing TableView: Pulsar's Database Table Abstraction (English)
David Kjerrumgaard, Apache Pulsar Committer, Developer Advocate, StreamNative
In many use cases, applications use Pulsar consumers or readers to fetch all the updates from a topic and construct a map holding the latest value for each key among the messages received. The new TableView consumer supports this access pattern directly in the Pulsar client API and encapsulates the complexity of constructing such local caches manually. This talk will demonstrate how to use the new TableView consumer in a simple application and discuss best practices and patterns for working with it.
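The access pattern TableView encapsulates, replaying a topic's updates and keeping only the latest value per key, can be sketched in a few lines. This is a hypothetical plain-Python illustration of the idea, not the Pulsar client API itself (which requires a running broker):

```python
def build_table_view(messages):
    """Replay a stream of (key, value) updates and keep only the
    latest value seen for each key -- the local-cache pattern that
    Pulsar's TableView consumer encapsulates for you."""
    table = {}
    for key, value in messages:
        table[key] = value  # a later update for a key overwrites the earlier one
    return table

# Simulated topic: three updates across two keys.
updates = [("user-1", "online"), ("user-2", "offline"), ("user-1", "away")]
view = build_table_view(updates)
```

With the real API, the same result is obtained without manual bookkeeping: the client materializes the map and keeps it current as new messages arrive on the topic.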
Route to the Next-Generation Message Middleware: How vivo Migrated to Pulsar (Mandarin)
Limin Quan and Jianbo Chen, Big Data Engineers, vivo
vivo has used Kafka to support its business with over one trillion messages per day. Now, it has migrated to Apache Pulsar as its next-generation message middleware to handle an even larger amount of data. In this talk, Quan and Chen will share the reasons behind vivo’s choice of Apache Pulsar and how vivo put Pulsar into practice (for example, migration plans and troubleshooting tips).
Practice and Optimization: Apache Pulsar in Tencent Cloud (Mandarin)
Xiaolong Ran, Apache Pulsar Committer, Senior R&D Engineer, Tencent Cloud
As Apache Pulsar has been put into production at scale in Tencent Cloud, it is widely used in different scenarios supporting companies and organizations across industries. To take the experience further, Tencent has developed a series of strategies for optimization and stability. In this talk, Ran will focus on Tencent Cloud's work on performance optimization and shed light on best practices and troubleshooting tips for using Apache Pulsar.
Apache Pulsar as Lakehouse: Introducing the Lakehouse Tiered Storage Integration for Apache Pulsar (Mandarin)
Hang Chen, Apache Pulsar PMC member, Software Engineer, StreamNative
Currently, tiered storage offloads cold data, but the offloaded data is managed by Apache Pulsar in a non-open format. This makes it difficult to integrate the data with other big data components, such as Flink SQL and Spark SQL. In this talk, Chen will explain how to use Lakehouse to manage offloaded data and integrate it with the cold data offloading mechanism.
Build High-Performance Apache Pulsar with Intel Optane Persistent Memory (Mandarin)
Fenghua Hu, Cloud Software Architect, Intel
Intel Optane persistent memory (PMem) is a revolutionary memory product featuring high performance, large capacity, storage persistence, and more. In this talk, Hu will demonstrate how to use Intel Optane PMem to take Apache Pulsar’s high-throughput, low-latency capabilities to another level and effectively cope with performance-demanding scenarios.
The Evolution of Apache Pulsar as A Message Queue in Huawei Device (Mandarin)
Lin Lin, Apache Pulsar PMC member, SDE Expert, Huawei Device
Xiaotong Wang, Senior Engineer, Huawei Device
In the cloud-native era, Huawei Device faces many challenges in its message queue infrastructure, such as the maintenance burden of operating different message queue solutions, high overheads, and building disaster tolerance capabilities. This talk will cover Huawei Device’s experience in redesigning its message queue architecture and present solutions to these problems.
To learn more about how companies and organizations today leverage Apache Pulsar for streaming and messaging, serverless computing, and mission-critical deployments in production, see the other Apache Pulsar-related sessions in the streaming and messaging tracks.
How to participate
Register now for free.
As the topics submitted to ApacheCon Asia 2022 show, Apache Pulsar has become one of the most active Apache projects over the past few years, with a vibrant community that continues to drive innovation and improvements to the project.
- Start your on-demand Pulsar training today with StreamNative Academy.
- Spin up a Pulsar cluster in minutes with StreamNative Cloud. StreamNative Cloud provides a simple, fast, and cost-effective way to run Pulsar in the public cloud.
- Save your spot at the Pulsar Summit San Francisco. The first in-person Pulsar Summit is taking place this August! Sign up today to join the Pulsar community and the messaging and event streaming community.
- Join the Apache Pulsar community. Subscribe to the Pulsar mailing lists for user-related or Pulsar development discussions. You can also join the Pulsar Slack to ask quick questions or discuss specialized topics.