StreamNative is a launch partner for managed Apache Iceberg tables in Databricks Unity Catalog — stream data directly into governed, query-ready Iceberg tables with unified metadata, time travel, and optimized query performance.
• OVERVIEW
StreamNative partners with Databricks to enable enterprises to seamlessly stream data into the Databricks Lakehouse. With multiple integration options available, organizations can select the approach that best fits their needs, whether the priority is governance, storage efficiency, or advanced analytics.

Easily stream real-time data into Databricks Lakehouse.
Choose from native Unity Catalog integration, Delta Lake streaming, or Spark connectors.
Ensure high-performance, cost-effective data streaming at scale.

• USE CASES

Effortlessly configure out-of-the-box integration within StreamNative Cloud to stream data directly into Databricks Unity Catalog.
Unified data governance across real-time workloads
Simplified data management for seamless analytics
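Once data lands in a governed Iceberg table, it can be queried from a Databricks notebook like any other Unity Catalog table, including time travel. The sketch below assumes a hypothetical `main.streaming.events` table and is guarded so it can be read outside a cluster (where no `spark` session exists):

```python
# Hypothetical three-level Unity Catalog name: catalog.schema.table.
# TIMESTAMP AS OF is standard Spark SQL time-travel syntax for Iceberg tables.
query = "SELECT * FROM main.streaming.events TIMESTAMP AS OF '2024-01-01 00:00:00'"

try:
    df = spark.sql(query)  # `spark` is predefined in Databricks notebooks
except NameError:
    df = None  # no active SparkSession outside a Databricks/Spark environment
```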
Use the Lakehouse Connector to sink streaming data into Databricks tables with configurable batching, partitioning, and schema management.
Configurable batch size and interval
Automatic table creation
Schema registry integration
Dead letter queue support
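The "configurable batch size and interval" behavior can be illustrated with a minimal pure-Python sketch. The class name, parameters, and defaults below are illustrative, not the connector's actual configuration keys:

```python
import time

class BatchBuffer:
    """Illustrative sketch: buffer records and flush when either a maximum
    batch size or a maximum interval since the first buffered record is
    reached, mirroring the connector's size/interval batching described above."""

    def __init__(self, max_batch_size=500, max_interval_s=5.0, clock=time.monotonic):
        self.max_batch_size = max_batch_size
        self.max_interval_s = max_interval_s
        self._clock = clock
        self._records = []
        self._first_at = None  # time the oldest buffered record arrived

    def add(self, record):
        if self._first_at is None:
            self._first_at = self._clock()
        self._records.append(record)

    def should_flush(self):
        if not self._records:
            return False
        if len(self._records) >= self.max_batch_size:
            return True
        return self._clock() - self._first_at >= self.max_interval_s

    def flush(self):
        """Return the buffered batch and reset the buffer."""
        batch, self._records, self._first_at = self._records, [], None
        return batch
```

In the real connector, records that fail to write after a flush would be routed to the configured dead letter queue rather than dropped.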
Read and write Pulsar topics from Databricks Spark jobs using the native Pulsar Spark connector for batch and structured streaming workloads.
Structured streaming support
Batch read from Pulsar topics
Exactly-once processing
Checkpoint-based recovery
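A structured streaming read with the Pulsar Spark connector looks roughly like the sketch below. The endpoints and topic are hypothetical, the option keys follow the connector's commonly documented names, and the snippet is guarded so it can be inspected without an active SparkSession:

```python
# Assumed option keys for the Pulsar Spark connector; endpoints are placeholders.
pulsar_options = {
    "service.url": "pulsar://broker.example.com:6650",  # broker service URL
    "admin.url": "http://broker.example.com:8080",      # admin REST endpoint
    "topic": "persistent://public/default/events",
}

try:
    # `spark` is predefined in Databricks notebooks.
    df = (spark.readStream
          .format("pulsar")
          .options(**pulsar_options)
          .load())
except NameError:
    df = None  # no SparkSession outside a Spark environment
```

Writing back is symmetric: a `df.writeStream.format("pulsar")` sink with a checkpoint location, which is what enables the checkpoint-based recovery noted above.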
Connect Databricks Spark jobs to StreamNative using the standard Kafka Spark connector. No code changes required for existing Kafka workloads.
Native Kafka protocol support
No application code changes
Databricks SQL support
Real-time dashboards and alerts
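Because StreamNative speaks the Kafka protocol natively, the standard Spark Kafka source works unchanged; only the bootstrap servers point at the StreamNative cluster. Endpoints and the topic name below are placeholders, and the snippet is guarded so it runs without a cluster:

```python
# Standard Spark Structured Streaming Kafka source options; the bootstrap
# address is a placeholder for your StreamNative cluster's Kafka endpoint.
kafka_options = {
    "kafka.bootstrap.servers": "broker.example.com:9092",
    "subscribe": "events",
    "startingOffsets": "earliest",
}

try:
    # `spark` is predefined in Databricks notebooks.
    df = (spark.readStream
          .format("kafka")
          .options(**kafka_options)
          .load())
except NameError:
    df = None  # no SparkSession outside a Spark environment
```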