StreamNative Introduces Lakestream Architecture and Launches Native Kafka Service
Ursa Engine writes every event directly to Iceberg and Delta Lake tables, queryable in seconds, not hours.
• CHALLENGES
Batch connectors and staging tables add hours of delay between event and insight. Analysts work on yesterday's data.
Proprietary storage formats prevent query engine portability. Switching analytics tools means costly migration.
Running Kafka for streaming and a separate system for lakehouse ingestion creates operational overhead and data duplication.
Dashboards and ML models train on data that's hours or days old. Business decisions are based on outdated information.
• THE WAY OUT

Every event in Kafka or Pulsar simultaneously lands as a row in Iceberg or Delta tables. No staging, no connectors.
Iceberg and Delta Lake tables work with Databricks, Snowflake, Trino, and Spark. No format lock-in.
BI dashboards and ML pipelines see fresh data within seconds, not hours.
• HOW IT WORKS
A three-layer model: Data - Metadata - Protocol
Events land durably in a write-ahead log and compact into Parquet with atomic catalog updates. Query engines see fresh and historical data through a union read path. Choose latency- or cost-optimized mode per stream.
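The data layer above can be pictured as a tiny in-memory model: events append to a WAL, a background job seals the WAL into immutable Parquet-like segments, and reads union both. This is an illustrative sketch, not StreamNative code; the class and method names are made up.

```python
class StreamStore:
    """Toy model of the data layer: a write-ahead log that compacts
    into immutable 'Parquet' segments, with a union read path."""

    def __init__(self):
        self.wal = []        # recent events, durable on arrival
        self.parquet = []    # compacted segments (lists of events)

    def append(self, event):
        # Events land in the WAL first, so they are queryable in seconds.
        self.wal.append(event)

    def compact(self):
        # Background job: seal the current WAL into a segment and
        # register it with the catalog in one atomic step.
        if self.wal:
            self.parquet.append(list(self.wal))
            self.wal.clear()

    def read(self):
        # Union read path: historical segments plus the fresh WAL,
        # so a query sees every event exactly once, in order.
        out = [e for seg in self.parquet for e in seg]
        out.extend(self.wal)
        return out
```

A query issued between compactions still sees every event, which is the point of the union read path.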
A streaming-aware catalog tracks schemas and maintains a streaming offset index that maps offsets to WAL and Parquet files. That enables high-performance ingestion and unified governance across streams and tables.
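One way to picture an offset-to-file index is a sorted list of (start offset, file) entries searched with binary search: each file covers the offsets from its start up to the next entry's start. A hypothetical sketch, not the actual StreamNative data structure:

```python
import bisect

class OffsetIndex:
    """Toy offset index: maps a consumer offset to the WAL or Parquet
    file that holds it. Entries are kept sorted by start offset; each
    file covers offsets [start, next_start)."""

    def __init__(self):
        self.starts = []   # sorted start offsets
        self.files = []    # file holding offsets [start, next_start)

    def add_file(self, start_offset, file_name):
        i = bisect.bisect_left(self.starts, start_offset)
        self.starts.insert(i, start_offset)
        self.files.insert(i, file_name)

    def locate(self, offset):
        # Rightmost file whose start offset is <= the requested offset.
        i = bisect.bisect_right(self.starts, offset) - 1
        if i < 0:
            raise KeyError(f"offset {offset} precedes all indexed files")
        return self.files[i]
```

A fetch at any offset resolves to exactly one file, whether that file is a sealed Parquet segment or the live WAL.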
Stateless services speak the Kafka and Pulsar protocols and translate client calls into storage operations. Because brokers are stateless, you can scale compute and storage independently, add capacity in seconds, and retain native client support.
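The value of stateless brokers is that any broker instance can serve any request, because all state lives in shared storage. A minimal sketch of that property (names are illustrative; shared storage stands in for an object store):

```python
class ObjectStorage:
    """Shared, durable storage layer (stand-in for an object store)."""
    def __init__(self):
        self.log = []

class Broker:
    """Stateless protocol layer: holds no data of its own, only
    translates produce/fetch calls into operations on shared storage."""
    def __init__(self, storage):
        self.storage = storage

    def produce(self, event):
        self.storage.log.append(event)
        return len(self.storage.log) - 1   # assigned offset

    def fetch(self, offset):
        return self.storage.log[offset]
```

A broker added during a traffic spike serves reads immediately, and losing a broker loses no data, because the broker never owned any.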
• RELATED TOOL
Ursa is the leaderless, diskless engine behind StreamNative — delivering direct-to-table writes, native Kafka support, and elastic scaling from a single unified platform.
Ursa decouples compute from storage with diskless, leaderless brokers, enabling instant failover, elastic scaling, and lower costs.
Ursa ensures exactly-once ingestion and unified data consistency through range-based indexing and atomic Parquet commits.
Ursa provides native Kafka via stateless brokers with tunable storage for optimized latency or cost.
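The exactly-once claim above rests on commits being atomic and keyed by offset range: a retried commit for a range the catalog has already recorded is a no-op, so a crash between writing a file and acknowledging it cannot produce duplicate rows. A toy sketch of that idempotent-commit idea (not StreamNative's actual catalog API):

```python
class Catalog:
    """Toy catalog: records which offset ranges have been committed.
    Commits are keyed by (start, end), so retries are idempotent."""

    def __init__(self):
        self.committed = {}   # (start, end) -> parquet file name

    def commit(self, start, end, file_name):
        key = (start, end)
        if key in self.committed:
            return False      # duplicate retry: no-op, no double rows
        self.committed[key] = file_name
        return True           # first commit for this range succeeds
```

Because the writer retries until a commit is acknowledged, and the catalog ignores duplicates, each event lands in the table exactly once.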
How Streaming Lakehouse stacks up against Kafka → ETL → Lakehouse, Kafka Tiered Storage, and Streaming Databases across data copies, freshness, compatibility, analytics, and ops.
Why Streaming Augmented Lakehouse (SAL) wins
One system, one copy. SAL writes streams directly to Iceberg/Delta as Parquet using a unified catalog. By serving streams and tables from the same bytes, it eliminates connectors, slashes costs, and enables instant analytics.
• FAQ
Can I keep using my existing Kafka clients?
Yes. StreamNative provides a native Apache Kafka service powered by Ursa. Your existing Kafka producers and consumers connect without code changes; just point them at your StreamNative cluster endpoint.
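In practice that switch is typically a bootstrap-server change in the client configuration. The hostname below is a placeholder, and authentication settings depend on your cluster:

```properties
# Before: self-managed Kafka
bootstrap.servers=kafka-1.internal:9092

# After: point the same client at your StreamNative endpoint
# (placeholder hostname; auth settings depend on your cluster)
bootstrap.servers=your-cluster.streamnative.example:9092
```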
Do I need connectors to land events in my lakehouse?
No. With Ursa's stream-table duality, every stream is simultaneously a lakehouse table. Ursa writes every event directly to Iceberg or Delta Lake tables as it arrives, with no connectors to deploy, manage, or monitor.
Which table formats are supported?
StreamNative supports Apache Iceberg and Delta Lake. Data lands in open Parquet files under your catalog, queryable from Databricks, Snowflake, Trino, Spark, and any engine that reads these formats.
How fresh is the data?
Data is queryable within seconds of being produced. Ursa writes to the write-ahead log and compacts into Parquet with atomic catalog updates, giving you sub-minute freshness for analytics and ML.
How do scaling and failover work?
For cost-optimized topics, Ursa's leaderless, diskless brokers are stateless, so compute and storage scale independently. Adding capacity takes seconds, failover takes seconds, and data is durably stored in object storage with eleven nines of durability. Latency-optimized topics retain Kafka's full replication model for the lowest latency.