March 19, 2025
15 min

UniLink: Your Universal “Tableflow” for Kafka—At Your Fingertips

Sijie Guo
Co-Founder and CEO, StreamNative

Confluent’s recent announcement of Tableflow’s general availability has sparked renewed enthusiasm around bridging Apache Kafka® with popular data lakehouses in real time. And for good reason: this release underscores how critical it is for organizations to have direct, seamless pipelines between streaming data and analytics platforms.

However, there’s a catch: Tableflow only works with Confluent Cloud. If you’re outside the Confluent ecosystem—or simply don’t want to be locked into it—where can you turn? Enter StreamNative’s Universal Linking (UniLink). Think of it as a universal “Tableflow,” enabling you to connect any Kafka cluster to any data lakehouse in real time.

Below, we’ll walk through what UniLink is, how it works, and how you can easily set it up to link data from Kafka topics to your favorite lakehouse—no Confluent lock-in required.

What Is Universal Linking?

UniLink is a platform-agnostic solution designed to move data between different Kafka clusters and modern data lakehouses (powered by Iceberg or Delta Lake) in real time. It unifies your data flow without forcing you into a specific cloud provider or proprietary environment.

Key Capabilities:

  • Full-Fidelity Replication
    UniLink captures every element—topics, offsets, consumer groups, schemas, and configurations—to create an exact copy of your Kafka topics. By preserving data integrity down to the byte, we eliminate replication drift, ensuring each environment behaves exactly the same.

  • Cost-Effective Replication
    UniLink replicates data from your Kafka clusters into a lakehouse efficiently, leveraging the object-storage-native stream format built as part of the Ursa Engine. This approach cuts streaming costs through smart zone-aware reads and direct object storage integration: by streaming data directly to cost-efficient cloud storage, you bypass broker bottlenecks, reduce cross-AZ transfer fees, and lower infrastructure overhead.

  • Universal Interoperability
    Connect any Kafka cluster (Confluent, MSK, Apache Kafka, Redpanda, etc.) to any lakehouse powered by Iceberg or Delta Lake. Whether you’re on-prem, in a multi-cloud environment, or both, UniLink simplifies your data architecture without tying you to a single vendor.
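The "down to the byte" fidelity claim can be checked mechanically. Below is a minimal, hypothetical Python sketch (not UniLink's actual verifier) that fingerprints records on the source and the replica, so any replication drift shows up as a digest mismatch:

```python
import hashlib

def record_digest(key, value, headers):
    """Digest one Kafka-style record; identical bytes yield an identical digest."""
    h = hashlib.sha256()
    h.update(key or b"")
    h.update(b"\x00")            # separator so (b"ab", b"c") != (b"a", b"bc")
    h.update(value or b"")
    for name, header_value in headers:
        h.update(name.encode())
        h.update(header_value)
    return h.hexdigest()

def topic_fingerprint(records):
    """Order-sensitive fingerprint of a sequence of (key, value, headers) records."""
    h = hashlib.sha256()
    for key, value, headers in records:
        h.update(record_digest(key, value, headers).encode())
    return h.hexdigest()

# Toy data standing in for a source partition and its replica.
source = [(b"k1", b"v1", [("trace", b"abc")]), (b"k2", b"v2", [])]
replica = [(b"k1", b"v1", [("trace", b"abc")]), (b"k2", b"v2", [])]
assert topic_fingerprint(source) == topic_fingerprint(replica)
```

Because the fingerprint is order-sensitive and covers keys, values, and headers, a single flipped byte, dropped header, or reordered record on the replica would produce a different digest.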

Where UniLink Excels

Below is a quick look at how UniLink compares with Confluent’s Tableflow across Tableflow’s most prominent features.

Effortless Real-Time Data Movement

UniLink allows you to effortlessly stream data from your Kafka topics to modern data lakehouses powered by Delta Lake or Apache Iceberg without being confined to a single cloud provider. Whether you’re on-prem, in a different major cloud, or in a hybrid environment, UniLink works seamlessly—truly universal.

Eliminating Data Silos

UniLink is designed to unify data pipelines, ensuring teams can access real-time insights without complex workflows. But unlike Tableflow, you can unify data across any Kafka cluster—Confluent, MSK, Redpanda, or self-managed Kafka—into any data lakehouse powered by Delta Lake or Apache Iceberg. This eliminates vendor lock-in and future-proofs your data streaming platform.

Achieving Real-Time Insights at Scale

UniLink provides high-throughput, low-latency data replication at a fraction of the cost of Tableflow. Under the hood, it leverages StreamNative’s Ursa engine to handle massive data volumes with robust performance guarantees at 10x lower cost. Scale up or down as your business grows without worrying about infrastructure costs.
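The cross-AZ savings behind claims like this can be made concrete with back-of-envelope arithmetic. The prices, volumes, and the ~2/3 cross-AZ fraction below are purely illustrative assumptions (not StreamNative or cloud-provider figures):

```python
# Illustrative cross-AZ transfer cost estimate; all numbers are assumptions.
CROSS_AZ_PER_GB = 0.02   # assumed $/GB for inter-AZ transfer (both directions)
GB_PER_DAY = 5_000       # assumed daily replication volume

def monthly_cross_az_cost(gb_per_day, fraction_cross_az):
    """Monthly cost when `fraction_cross_az` of traffic crosses availability zones."""
    return gb_per_day * 30 * fraction_cross_az * CROSS_AZ_PER_GB

# Broker-mediated reads in a 3-AZ cluster: roughly 2/3 of fetches land on a
# broker in another zone, so ~2/3 of the volume pays the cross-AZ rate.
naive = monthly_cross_az_cost(GB_PER_DAY, 2 / 3)

# Zone-aware reads plus writing straight to object storage: cross-AZ traffic
# approaches zero, so the transfer line item largely disappears.
zone_aware = monthly_cross_az_cost(GB_PER_DAY, 0.0)

print(f"broker-mediated: ${naive:,.0f}/month, zone-aware: ${zone_aware:,.0f}/month")
```

Under these assumed numbers the naive path pays about $2,000/month in transfer fees that the zone-aware, object-storage path avoids; the point is the shape of the saving, not the exact figures.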

Simplifying Pipelines for Faster Outcomes

UniLink eliminates complexities in Kafka-to-analytics pipelines, making them easier to build and maintain without being locked into Confluent’s proprietary environment. You can keep your existing Kafka deployments, DevOps tools, and data platforms—no re-architecture required.

Why Lock Yourself Into a Single Vendor?

If you want all the benefits of Tableflow without being locked into a single vendor, then UniLink is for you. With UniLink, you have the freedom to use any Kafka vendor you want. In an era where the Kafka landscape is evolving rapidly, keeping your options open makes sense.

With UniLink, you can:

  • Connect any Kafka distribution.
  • Send data to on-prem or cloud-based analytics platforms or lakehouses.
  • Avoid the heavy lifting of migrating everything to Confluent Cloud.
  • Simplify operations by managing fewer specialized tools.

What’s Under the Hood: Ursa Stream Storage

UniLink’s “secret sauce” is Ursa Stream Storage—a headless, multi-modal storage layer built on object storage and open table formats (Apache Iceberg or Delta Lake). Internally, it stores data in Parquet files and can present those files as either continuous streams or as well-organized, compacted tables.
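To give a feel for the "streams or tables" duality, here is a toy, stdlib-only sketch; it is a conceptual illustration only, not Ursa's actual Parquet-backed implementation:

```python
# Conceptual illustration: one append-only log exposed two ways - as an
# ordered stream and as a compacted, table-like view. Loosely analogous to
# how Ursa can serve the same stored data as streams or compacted tables.
class DualModeLog:
    def __init__(self):
        self._records = []                     # append-only (offset, key, value)

    def append(self, key, value):
        self._records.append((len(self._records), key, value))

    def stream(self):
        """Streaming view: every record, in offset order, full history."""
        yield from self._records

    def table(self):
        """Table view: compacted down to the latest value per key."""
        latest = {}
        for _offset, key, value in self._records:
            latest[key] = value
        return latest

log = DualModeLog()
log.append("user-1", {"plan": "free"})
log.append("user-2", {"plan": "pro"})
log.append("user-1", {"plan": "pro"})          # upgrade overwrites in table view

assert len(list(log.stream())) == 3            # the stream keeps full history
assert log.table() == {"user-1": {"plan": "pro"}, "user-2": {"plan": "pro"}}
```

The same stored bytes back both views: consumers that care about every event read the stream, while analytical engines query the compacted table.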

Curious to learn more? Check out The Evolution of Log Storage in Modern Data Streaming Platforms for a deeper look at Ursa and how its efficient use of infrastructure makes it the lowest-cost Kafka solution on the market today.

Unified Governance with Unity Catalog, Snowflake Open Catalog & AWS S3 Tables

UniLink isn’t just about moving data between Kafka and lakehouses. It also integrates natively with popular data catalogs that support Iceberg and/or Delta Lake, uniting real-time streaming and analytical data under a single governance model. Specifically, UniLink works with Databricks Unity Catalog, Snowflake Open Catalog, and AWS S3 Tables.

By leveraging UniLink to replicate your Kafka topics as lakehouse tables in these catalogs, you achieve:

  • Centralized Policies & Access Control
    Define and apply consistent security, lineage, and compliance rules once, instead of duplicating them across multiple systems.

  • Schema & Metadata Discovery
    A single “source of truth” for data definitions in both real-time streaming and batch environments, boosting data reliability and usability.

  • Reduced Data Silos
    Break down barriers between streaming and analytics teams; everyone has a unified view of the data, enabling faster insights and easier collaboration.

  • Open Standard Formats
    Since Ursa Engine writes data in Iceberg or Delta Lake by default, any compatible downstream engine—Databricks, Snowflake, AWS Athena, and more—can instantly query your latest streaming data.
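As a conceptual illustration of the "define once, apply everywhere" idea behind centralized policies (the names and masking rule below are hypothetical, not a real catalog API):

```python
# Hypothetical sketch: a single masking policy, defined once, applied
# identically to a streaming record and a lakehouse table row.
MASKED_COLUMNS = {"email", "ssn"}     # governed columns, defined once

def apply_policy(row):
    """Mask governed columns regardless of which system the row came from."""
    return {k: ("***" if k in MASKED_COLUMNS else v) for k, v in row.items()}

streaming_record = {"user": "ada", "email": "ada@example.com", "clicks": 7}
table_row = {"user": "ada", "email": "ada@example.com", "ltv": 120.0}

assert apply_policy(streaming_record)["email"] == "***"
assert apply_policy(table_row)["email"] == "***"
assert apply_policy(table_row)["ltv"] == 120.0    # ungoverned columns pass through
```

The payoff is that the streaming team and the analytics team cannot drift apart: there is exactly one place where the rule lives.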

Why Now Is the Perfect Time to Go Universal

Confluent’s announcement has spotlighted the importance of bridging Kafka and analytics seamlessly. If you’ve been evaluating solutions for real-time data pipelines, there’s no better moment to consider UniLink. Keep your options open by choosing a truly universal solution that fits your existing environment and future plans.

In a Nutshell

Tableflow: A solid step for Confluent Cloud users who want direct pipelines from Kafka to data warehouses and lakehouses.

UniLink: Everything Tableflow aims to do—plus support for any Kafka cluster, with no forced move to Confluent Cloud.

If you need real-time data replication, analytics, and streaming at scale, but want to avoid the cost and complexity of a single-vendor ecosystem, UniLink is your ready-to-roll universal alternative.

Take the Next Step

Ready to ride this data-streaming wave on your terms? Check out Universal Linking and discover how it can unlock the full potential of your existing Kafka infrastructure—without forcing a move to Confluent Cloud.


Make the most of this exciting moment in data streaming, and harness the freedom, flexibility, and universal interoperability your business deserves!

Sijie Guo
Sijie’s journey with Apache Pulsar began at Yahoo! where he was part of the team working to develop a global messaging platform for the company. He then went to Twitter, where he led the messaging infrastructure group and co-created DistributedLog and Twitter EventBus. In 2017, he co-founded Streamlio, which was acquired by Splunk, and in 2019 he founded StreamNative. He is one of the original creators of Apache Pulsar and Apache BookKeeper, and remains VP of Apache BookKeeper and PMC Member of Apache Pulsar. Sijie lives in the San Francisco Bay Area of California.
