Universal Linking — Replicate Kafka and Turn Topics into Tables, Anywhere

Mirror any Kafka cluster into Ursa with full fidelity and turn topics into Iceberg/Delta tables — zero ETL, no lock-in.


Any Kafka, Any Lakehouse

Link any Kafka to any Iceberg/Delta lakehouse.

Instant Topics to Tables

Topics become Iceberg/Delta tables automatically.

Lower Cost, No Lock-in

Replicate at a fraction of the cost, with no vendor lock-in.

CAPABILITIES

What You Get With Universal Linking

Full-fidelity Kafka replication

Byte-for-byte replication from any Kafka-compatible source into Ursa. Captures topics, partitions, offsets, consumer groups, schemas, ACLs, and configs to create an exact copy. Preserves consumer offsets so you can cut over without reprocessing or losing position.
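
As a sketch of what offset preservation makes possible, the confluent-kafka Python client can compare a group's committed offsets on the source cluster and on the Ursa endpoint before cutover; the endpoints, topic, partition count, and group name below are illustrative.

    from confluent_kafka import Consumer, TopicPartition

    GROUP = "orders-app"  # illustrative consumer group
    parts = [TopicPartition("orders", p) for p in range(3)]  # illustrative topic/partitions

    def committed_offsets(bootstrap):
        # committed() queries the group coordinator; no subscription needed.
        c = Consumer({"bootstrap.servers": bootstrap, "group.id": GROUP})
        offsets = {tp.partition: tp.offset for tp in c.committed(parts, timeout=10)}
        c.close()
        return offsets

    # Hypothetical endpoints for the source cluster and the Ursa replica.
    if committed_offsets("source-kafka:9092") == committed_offsets("ursa-endpoint:9092"):
        print("Offsets in sync; safe to cut consumers over.")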

Topics to tables with zero ETL

Once data is in Ursa, UniLink automatically materializes Kafka topics as Iceberg/Delta tables. Ursa's stream storage keeps data in Parquet on object storage and exposes it simultaneously as a continuous stream and as well-organized, compacted tables, without extra pipelines.
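
Because the tables are standard Iceberg/Delta, any engine can read them. A minimal PySpark sketch, assuming an Iceberg REST catalog in front of Ursa's storage; the catalog name, URI, and table identifier are illustrative:

    from pyspark.sql import SparkSession

    # Assumes the Iceberg Spark runtime jar is on the classpath; the
    # catalog name, URI, and namespace below are illustrative.
    spark = (SparkSession.builder
             .appName("topics-as-tables")
             .config("spark.sql.catalog.ursa", "org.apache.iceberg.spark.SparkCatalog")
             .config("spark.sql.catalog.ursa.type", "rest")
             .config("spark.sql.catalog.ursa.uri", "https://catalog.example.com")
             .getOrCreate())

    # The replicated Kafka topic "orders" is queryable as a table.
    spark.sql("SELECT count(*) AS events FROM ursa.kafka.orders").show()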

Universal interoperability, no vendor lock-in

Platform-agnostic by design. Any Kafka (Confluent Cloud, MSK, Redpanda, self-managed), any environment (on-prem, single cloud, multi-cloud, hybrid), and any Iceberg/Delta lakehouse (Databricks, Snowflake, AWS Athena, and more via open table formats).

Cost-effective replication, fan-out, and tiered storage

Zone-aware reads minimize cross-AZ traffic from source Kafka clusters. Direct object-storage writes stream data into Ursa's Parquet-backed storage, skipping the broker-as-bottleneck design. Tiered retention lets you shorten retention on source clusters and keep long-term history in Ursa at much lower cost.
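
One way to realize the tiered-retention savings, sketched with the confluent-kafka AdminClient: once long-term history lives in Ursa, shorten retention.ms on the source topic (the endpoint and topic name are illustrative):

    from confluent_kafka.admin import AdminClient, ConfigResource

    # Hypothetical source-cluster endpoint.
    admin = AdminClient({"bootstrap.servers": "source-kafka:9092"})

    # Keep only 3 days on the source brokers; older history is served
    # from Ursa's object-storage-backed tables instead.
    three_days_ms = str(3 * 24 * 60 * 60 * 1000)
    resource = ConfigResource(ConfigResource.Type.TOPIC, "orders",
                              set_config={"retention.ms": three_days_ms})

    for _, future in admin.alter_configs([resource]).items():
        future.result()  # raises if the broker rejects the change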

Built into One StreamNative Platform and Ursa

Replicated data lands in Ursa, the lakehouse-native streaming engine for Kafka. Streams are usable as Kafka topics (via Kafka Service) and as Iceberg/Delta tables for analytics. Add the Orca Agent Engine, and agents can observe these topics and tables in real time under policy and audit controls.

USE CASES

What You Can Do with Universal Linking

01

Kafka migration & consolidation

Link any source cluster (on-prem, Confluent, MSK, Redpanda, etc.) to Ursa, start continuous replication, and let both clusters run in parallel. Once offsets are in sync, redirect consumers to Kafka Service on Ursa. Because UniLink preserves consumer offsets, schemas, and configs, you keep application behavior identical while you consolidate clusters or move to the cloud.
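
A minimal cutover sketch with the confluent-kafka client, assuming offsets are already in sync: only the bootstrap endpoint changes, while the group, topics, and processing logic stay the same (all names illustrative):

    from confluent_kafka import Consumer

    consumer = Consumer({
        "bootstrap.servers": "ursa-endpoint:9092",  # was "source-kafka:9092"
        "group.id": "orders-app",                   # unchanged across the cutover
        "auto.offset.reset": "earliest",            # only applies without a committed offset
    })
    consumer.subscribe(["orders"])

    # poll() resumes from the committed offsets UniLink replicated,
    # so no messages are reprocessed or skipped.
    msg = consumer.poll(1.0)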

02

Real-time lakehouse-native analytics

Continuously replicate Kafka topics into Iceberg/Delta tables in Ursa. Register those tables in Unity Catalog, Snowflake Open Catalog, AWS S3 Tables, etc., for unified governance and discovery. Use your favorite engines (Spark, Trino, Databricks, Snowflake, Fabric, Athena, etc.) to query up-to-date tables for BI and AI. Shift analytics and AI left — from nightly Kafka dumps to continuous, governed topics-to-tables.
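
For example, a Spark Structured Streaming job can read the materialized Iceberg table incrementally, so dashboards and feature pipelines see new topic data as new table snapshots; this assumes a session with an Iceberg catalog already configured, and the identifiers are illustrative:

    from pyspark.sql import SparkSession

    # Assumes an Iceberg catalog named "ursa" is configured on the session.
    spark = SparkSession.builder.getOrCreate()

    stream = (spark.readStream
              .format("iceberg")
              .option("stream-from-timestamp", "0")  # start from the oldest snapshot
              .load("ursa.kafka.orders"))

    (stream.writeStream
     .format("console")
     .option("checkpointLocation", "/tmp/checkpoints/orders")
     .start()
     .awaitTermination())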

03

Lakehouse tiered storage & cost-efficient fan-outs

Offload long-term retention from expensive Kafka brokers into Ursa's object-storage-backed tables. Spin up isolated Kafka endpoints on Ursa for analytics, testing, or per-team read workloads, without duplicating data. Keep data stored once while serving many independent consumers. Scale read traffic and dev/test environments without multiplying clusters and storage.
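
Fan-out here is just independent consumer groups against the same Ursa-backed endpoint: each group tracks its own offsets while the underlying data is stored once (endpoint, topic, and group names illustrative):

    from confluent_kafka import Consumer

    def team_consumer(group_id):
        # Hypothetical read endpoint on Ursa; each group is isolated.
        c = Consumer({"bootstrap.servers": "ursa-endpoint:9092",
                      "group.id": group_id,
                      "auto.offset.reset": "earliest"})
        c.subscribe(["orders"])
        return c

    analytics = team_consumer("analytics-readers")
    dev_test = team_consumer("dev-test-readers")
    # Both receive the full stream independently; no data is duplicated.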

04

Disaster recovery & global streaming mesh

Replicate from your primary Kafka cluster into a secondary Ursa cluster in real time. In a failure, flip workloads to the replica with preserved offsets — no data loss or blind rewinds. Store Kafka data once in Ursa, then spin up read-only Kafka endpoints in any region to serve local apps and satisfy data residency. Get a global data streaming mesh while maintaining centralized governance and storage economics.
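
A failover sketch with the confluent-kafka client: on sustained errors, reconnect the same group to the replica endpoint, where the replicated committed offsets let it resume in place. Real failover logic would add health checks and backoff; the endpoints and names are illustrative:

    from confluent_kafka import Consumer

    PRIMARY, REPLICA = "primary-kafka:9092", "ursa-replica:9092"  # illustrative

    def connect(bootstrap):
        c = Consumer({"bootstrap.servers": bootstrap, "group.id": "orders-app"})
        c.subscribe(["orders"])
        return c

    consumer = connect(PRIMARY)
    while True:
        msg = consumer.poll(1.0)
        if msg is None:
            continue
        if msg.error():
            # On a fatal or sustained error, flip to the replica; the group
            # resumes from its replicated offsets instead of rewinding.
            consumer.close()
            consumer = connect(REPLICA)
            continue
        print(msg.value())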

HOW IT WORKS

Link → Sync → Consume

Universal Linking bridges your Kafka streams to open table formats in three simple steps.

Link

Connect your existing Kafka cluster (Apache Kafka, Confluent, MSK, or any compatible system) with a few clicks.

Sync

Choose your topics and target table format (Iceberg or Delta). Universal Linking handles schema mapping and exactly-once delivery.

Consume

Query your streaming data as tables from Spark, Trino, Flink, or any engine that speaks Iceberg/Delta.
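
As one example of the Consume step, DuckDB's iceberg and httpfs extensions can query the materialized table straight from object storage; the S3 path below is illustrative:

    import duckdb

    con = duckdb.connect()
    con.execute("INSTALL iceberg; LOAD iceberg; INSTALL httpfs; LOAD httpfs;")

    # Hypothetical warehouse path for the replicated "orders" topic.
    df = con.execute(
        "SELECT * FROM iceberg_scan('s3://my-bucket/warehouse/kafka/orders') LIMIT 10"
    ).fetchdf()
    print(df)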

FAQs

What is Universal Linking?

Universal Linking (UniLink) is StreamNative's federation mechanism for Kafka. It mirrors any Kafka cluster — Confluent, MSK, Redpanda, or self-managed — into the Lakestream catalog with full fidelity (topics, consumer groups, schemas). Once data is in Ursa, stream-table duality makes it automatically available as Iceberg or Delta Lake tables — without tying you to a specific Kafka vendor or cloud.

Do I need to replace my existing Kafka clusters?

No. UniLink is designed to work with existing Kafka deployments — Confluent, MSK, Redpanda, or self-managed. You keep your current clusters; UniLink replicates into your lakehouse via Ursa storage, where you can run workloads, analytics, and migrations at your own pace.

How is Universal Linking different from other 'topics to tables' offerings?

Other 'topics to tables' offerings copy data from Kafka into a lakehouse — creating a second copy inside a single vendor's ecosystem. Universal Linking takes a fundamentally different approach: it federates any Kafka cluster into a Lakestream catalog, where Ursa's stream-table duality makes every topic natively available as an Iceberg or Delta Lake table. There is no copy step, no single-vendor lock-in, and the data is in open formats from the start.

How does UniLink deliver high-throughput replication at lower cost?

UniLink leverages Ursa Stream Storage with zone-aware reads and direct object-storage writes. That design removes the broker as a bottleneck, cuts cross-AZ traffic, and enables high-throughput, low-latency replication at significantly lower cost than traditional approaches; the UniLink blog calls out roughly 10x savings in some scenarios. Actual results vary by workload and topology.

Can I use UniLink for general cluster-to-cluster replication?

UniLink's primary target is Ursa — once data is in Ursa, you can expose it as Kafka topics (Kafka Service) or tables for analytics. That makes it ideal for migrating from existing Kafka clusters into StreamNative, building DR topologies, and powering lakehouse and AI use cases. If you need additional cluster-to-cluster patterns, you can layer those on the replicated data in Ursa.

How do UniLink, UniConn, and Orca fit together?

Think of UniLink, UniConn, and Orca as layers: UniLink replicates Kafka and turns topics into tables in Ursa. Universal Connect (UniConn) connects external systems into/out of Kafka & Pulsar using connectors. Orca Agent Engine runs and governs agents on top of streams and tables. Together, they give you data in motion, topics-to-tables, and agents, with open formats and no lock-in.

It’s easy to get started with Universal Linking

  • Register a source Kafka cluster (Confluent, MSK, Redpanda, self‑managed).
  • Pick topics and consumer groups to replicate.
  • Choose lakehouse & catalog options (Iceberg/Delta; Unity Catalog, Snowflake Open Catalog, AWS S3 Tables, etc.).
  • Start linking and watch topics appear as streams and tables in Ursa.