Native Apache Kafka Service is coming soon to StreamNative Cloud.
Keep your Kafka APIs while Ursa runs your streams: eliminate cluster sprawl and cut infra cost by up to 95%.
Native Kafka on Ursa. Your tools work as-is.
No leader elections or broker disks. Compute and storage scale independently.
Order-of-magnitude savings in benchmarks.
• OVERVIEW
Kafka Service on StreamNative gives you cloud-native Kafka streaming backed by Ursa, the first lakehouse-native streaming engine for Kafka recognized with a VLDB Best Industry Paper award. You keep the Kafka protocol and ecosystem; Ursa handles durability, scaling, and topic-to-table conversion under the hood.
A native Kafka service, not a protocol translation layer. Your producers, consumers, and tools work as-is.
It supports pub/sub and streaming from the same API, scales to millions of topics and many tenants, and streams data to the lakehouse and, when you're ready, to agentic AI with Orca.
• CAPABILITIES
Append-only event log consumed in real time. High throughput, low latency, durable retention.
Producers write and consumers read independently. Stateless brokers with smooth, elastic scaling.
Ursa writes events directly into Iceberg/Delta. Same data as topic and table with zero-copy conversion.
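The log-and-offset model described above can be sketched in a few lines. `TopicLog` is a toy, in-memory illustration of the concept, not StreamNative's implementation: producers append to an ordered log, and each consumer group tracks its own offset and reads at its own pace.

```python
# Toy sketch (assumption, not the real engine): an append-only log
# where producers and consumers are fully decoupled.

class TopicLog:
    """Minimal append-only event log with per-group read offsets."""

    def __init__(self):
        self.events = []   # the ordered, append-only log
        self.offsets = {}  # consumer-group name -> next offset to read

    def produce(self, event):
        self.events.append(event)

    def consume(self, group, max_records=10):
        """Return up to max_records events for this group, advancing its offset."""
        start = self.offsets.get(group, 0)
        batch = self.events[start:start + max_records]
        self.offsets[group] = start + len(batch)
        return batch

log = TopicLog()
for i in range(5):
    log.produce({"order_id": i})

fast = log.consume("analytics", max_records=5)  # reads everything available
slow = log.consume("billing", max_records=2)    # lags behind, by design
```

Note that the "billing" group's position is independent of both the producer and the "analytics" group; a real broker adds durability, partitioning, and replication on top of this idea.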
• USE CASES
Build microservices that communicate via Kafka topics instead of brittle RPC calls. Services publish domain events, others subscribe and react at their own pace. When you add Orca Agent Engine, agents can observe the same topics and act under policies.
Use Kafka Service as the front door for your lakehouse. Ingest events into Kafka topics, let Ursa write them straight into Iceberg/Delta tables, and query in your existing engines. Kill off nightly batch jobs and move to continuous streams-to-tables.
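The "same data as topic and table" idea above can be shown with a toy example. The field names and in-memory structures below are illustrative assumptions: the point is that records ingested once are readable both in arrival order (stream view) and as queryable rows (table view), with no copy job in between.

```python
# Toy illustration (assumption): one dataset, two views.
events = [
    {"ts": 1, "user": "a", "amount": 30},
    {"ts": 2, "user": "b", "amount": 45},
    {"ts": 3, "user": "a", "amount": 25},
]

# Stream view: consume records in arrival order.
stream_view = iter(events)
first = next(stream_view)

# Table view: the same records, aggregated like rows in a lakehouse table.
def total_by_user(rows):
    totals = {}
    for row in rows:
        totals[row["user"]] = totals.get(row["user"], 0) + row["amount"]
    return totals

totals = total_by_user(events)
```

In the real service the table view is an Iceberg/Delta table written by Ursa, so any engine that reads those formats can run the aggregation.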
Use Kafka topics as the event feed for your feature pipelines, vector DB upserts, and RAG index updates. Ursa’s lakehouse writes provide long-term, queryable history for training and analysis. When you turn on Orca, agents watch topics and tables to make decisions and call tools.
Replace self-managed Kafka clusters with a single managed service—cut costs by up to 95% without app changes.
• SUCCESS STORIES
StreamNative is built for teams at the intersection of real-time systems, data, and AI initiatives.
• DEPLOYMENT OPTIONS
Kafka Service is part of StreamNative Data Streaming Platform and is available in four deployment models:

Fully managed Kafka Service, auto-scaling up and down with traffic. You just send records and pay for what you use.
- Auto-scales with traffic, no capacity planning
- Pay only for what you use

Dedicated clusters operated by StreamNative for teams that need isolation, predictable capacity, or specific SLAs.
- Single-tenant isolation with dedicated resources
- Predictable capacity and SLA guarantees

Run Kafka Service in your own cloud account (AWS, Azure, GCP). Data stays in your VPC; StreamNative runs the control plane.
- Data stays in your VPC, full network control
- StreamNative manages operations remotely

Cloud-like operations in your own infrastructure for the strictest security and compliance needs.
- Runs on your own infrastructure
- Meets strictest compliance requirements
• FAQ
Is this a Kafka protocol compatibility layer? No. StreamNative runs native Apache Kafka, a fork of Apache Kafka 4.2+, not a translation layer. Point bootstrap.servers at the StreamNative endpoint and keep your existing clients and tools with zero code changes.
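Since only the connection settings change, migration is typically a config edit. A minimal sketch using standard Kafka client property names; the endpoint and credentials below are placeholders, and the security settings are typical managed-cloud assumptions, not documented StreamNative values:

```python
# Existing client configuration (standard Kafka client properties).
existing_config = {
    "bootstrap.servers": "kafka-old.internal:9092",
    "client.id": "orders-service",
}

# Same application code, new endpoint -- only connection settings change.
migrated_config = {
    **existing_config,
    "bootstrap.servers": "your-cluster.streamnative.example:9093",  # placeholder endpoint
    "security.protocol": "SASL_SSL",   # typical managed-cloud settings (assumption)
    "sasl.mechanism": "PLAIN",
    "sasl.username": "<api-key>",      # placeholder credential
    "sasl.password": "<api-secret>",   # placeholder credential
}
```

The dict would be passed unchanged to whatever Kafka client library the application already uses; producer and consumer code does not change.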
How does this differ from self-managed Kafka? With self-managed Kafka, brokers own storage, you over-replicate across AZs, and you constantly rebalance and grow disks. With Kafka on StreamNative, cost-optimized topics run with stateless brokers, object-storage durability, and no leader elections. Latency-optimized topics retain Kafka's full feature set with disk-based storage. Both profiles coexist in the same cluster, and every topic is simultaneously a lakehouse table, with no connectors needed.
Where does the up-to-95% savings figure come from? Internal benchmarks show Ursa sustaining 5 GB/s Kafka workloads at around $50/hour in infra spend, roughly 5% of the cost of some traditional engines, which translates to up to 95% lower Kafka infra cost in certain scenarios. Your exact savings depend on scale, topology, and retention settings.
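The arithmetic behind that claim is straightforward. The ~$1,000/hour baseline below is not a published number; it is simply what the stated "around 5%" ratio implies for the comparison engines:

```python
# Back-of-envelope check of the benchmark framing above.
ursa_cost_per_hour = 50   # $/hour for a 5 GB/s workload (from the text)
ratio = 0.05              # "around 5% of the cost of some traditional engines"

# Implied baseline cost of the comparison engines (assumption, derived from the ratio).
implied_baseline = ursa_cost_per_hour / ratio

# Savings relative to that implied baseline.
savings = 1 - ursa_cost_per_hour / implied_baseline
```

This is why the headline figure is hedged as "up to 95%": it holds only against workloads that would otherwise cost about twenty times more to run.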
When should I choose Kafka vs. Pulsar? Both Kafka Cluster and Pulsar Cluster run on the same Ursa engine. Choose Kafka Cluster when your workload is Kafka-native. Choose Pulsar when you need advanced messaging patterns (dead-letter queues, delayed messages, shared/failover/key-shared subscriptions) or multi-tenant isolation. Many customers run both.