Orca is a streaming-native agent runtime that lets you deploy, coordinate, and scale autonomous AI agents on real-time Kafka and Pulsar streams. It’s framework-agnostic, tool-ready via MCP, and built with state, registry, and observability for production.
Agents live in your streams, not behind request/response. They react the moment events occur, and you can replay topics to test fixes or explain outcomes.
Live, Fresh Context
Every decision starts with an event, so agents operate on current state—not nightly snapshots. That keeps reasoning grounded and responsive as conditions change.
Scalable Orchestration
Build, deploy, and operate event-driven agents on a unified streaming backbone (Pulsar or Kafka). Scale from one clever agent to a coordinated, collaborative fleet.
Secure, Governed Flows
Get full visibility and control over topics, tools, and actions—with policy guardrails, scoped secrets and RBAC, plus audit from "event seen" to "action taken".
What is Orca?
Orca Agent Engine is an event-driven runtime that embeds agents in your streams so they can perceive events, recall context, plan with rules/LLMs, and execute actions—continuously.
Event Bus Integration
Subscribe to real-time Kafka or Pulsar streams with schemas and consumer groups; get ordering, backpressure handling, and durability guarantees; and deliver events straight to agents.
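As a rough illustration, here is a minimal consumer-group subscription using the confluent-kafka Python client; the topic and group names are hypothetical, and Orca's own configuration surface may differ:

```python
from confluent_kafka import Consumer

# Hypothetical names; Orca's configuration surface may differ.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orca-agents",       # consumer group shares partitions across agents
    "auto.offset.reset": "earliest",
    "enable.auto.commit": False,     # commit only after the agent has acted
})
consumer.subscribe(["orders"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    handle_event(msg.value())        # hypothetical hook that delivers to an agent
    consumer.commit(message=msg)     # at-least-once delivery
```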
Tool Invocation via MCP
Agents call event-driven functions, internal services, and SaaS APIs through one secure interface—with auth, quotas, retries, and audit.
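For context, this is roughly what a tool call looks like through the official mcp Python client package; the server command, tool name, and arguments below are hypothetical, and Orca layers auth, quotas, retries, and audit around calls like these:

```python
import asyncio
from mcp import ClientSession, StdioServerParameters
from mcp.client.stdio import stdio_client

async def main():
    # Hypothetical MCP server command and tool name.
    server = StdioServerParameters(command="my-tools-server")
    async with stdio_client(server) as (read, write):
        async with ClientSession(read, write) as session:
            await session.initialize()
            tools = await session.list_tools()          # discover available tools
            print([t.name for t in tools.tools])
            result = await session.call_tool(
                "create_ticket", arguments={"priority": "high"}
            )
            print(result)

asyncio.run(main())
```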
Registry & Deployment
Catalog inputs, outputs, versions, and owners; discover reusable agents and tools; roll out or roll back safely with policy and approvals.
Observability & Governance
OpenTelemetry traces, metrics, and logs; PII filtering, scoped secrets, RBAC, quotas; end-to-end audits from event seen to action taken.
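As a sketch of the tracing side using the standard OpenTelemetry Python API (the span and attribute names are illustrative, and plan() is a stand-in for agent logic):

```python
from opentelemetry import trace

tracer = trace.get_tracer("orca.agent")    # hypothetical instrumentation name

def decide(event: dict) -> str:
    # One span per decision ties "event seen" to "action taken" in the trace.
    with tracer.start_as_current_span("agent.decide") as span:
        span.set_attribute("event.key", str(event.get("key")))
        action = plan(event)               # plan() is a stand-in for agent logic
        span.set_attribute("agent.action", action)
        return action
```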
Replayable State & Memory
Restore state from the event log, backtest logic, and replay inputs deterministically; keep vector/context memory durable across restarts.
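One way to picture deterministic replay, sketched with confluent-kafka: rewind a partition to the beginning of the log under a separate group id and re-apply every event; apply_event() and the topic name are hypothetical:

```python
from confluent_kafka import Consumer, TopicPartition, OFFSET_BEGINNING

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orca-replay",            # separate group: live agents are untouched
    "enable.auto.commit": False,
})
# Rewind partition 0 of a hypothetical topic to the start of the log.
consumer.assign([TopicPartition("orders", 0, OFFSET_BEGINNING)])

state = {}
while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None:
        break                             # caught up; replay complete
    state = apply_event(state, msg.value())  # re-derive state deterministically
```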
Bring-Your-Own Frameworks
Run agents built with Google ADK or the OpenAI Agents SDK—no proprietary SDK required. Connect them to streams, discover tools, and scale them as event-driven services.
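For example, an agent written with the OpenAI Agents SDK (the openai-agents Python package) needs no Orca-specific code; the on_event hook below is a hypothetical entry point a stream runtime would call per event:

```python
from agents import Agent, Runner   # OpenAI Agents SDK; no Orca-specific imports

triage = Agent(
    name="triage",
    instructions="Classify each incoming event and suggest a next action.",
)

def on_event(payload: str) -> str:
    # Hypothetical per-event hook: a stream consumer loop would call this
    # for every message, so the agent reasons over fresh context each time.
    result = Runner.run_sync(triage, payload)
    return result.final_output
```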
How Orca works
Orca runs agents inside your streams: configure sources, deploy and register agents/tools, let them reason on fresh context, act via events or MCP calls, and observe with traces, audit, and replay.
1. Configure
Subscribe to Kafka or Pulsar topics with schemas.
2. Deploy
Package the agent and deploy it as an event-driven function; register inputs/outputs, secrets, and permissions, then scale horizontally.
3. Reason
Use LLMs over live events and durable memory to choose the next best action.
4. Act
Emit new events or invoke tools via MCP with retries and idempotency built in.
5. Observe
Trace end-to-end with OpenTelemetry, audit decisions and tool calls, and replay streams to test improvements safely. A minimal sketch of the full loop follows these steps.
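A compressed, hypothetical version of that loop with confluent-kafka, where decide() stands in for the reasoning step and the topic names are illustrative (the deploy step happens outside the code):

```python
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "orca-agent",
    "enable.auto.commit": False,
})
producer = Producer({
    "bootstrap.servers": "localhost:9092",
    "enable.idempotence": True,              # safe retries on the act step
})
consumer.subscribe(["events.in"])            # 1. configure the subscription

while True:
    msg = consumer.poll(timeout=1.0)         # consume a fresh event
    if msg is None or msg.error():
        continue
    action = decide(msg.value())             # 3. reason (decide() is a stand-in)
    producer.produce("events.out", action)   # 4. act by emitting a new event
    producer.flush()
    consumer.commit(message=msg)             # 5. commit only after the action lands
```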
How teams use Orca
Each pattern benefits from always-on streams, replay, and loose coupling.
Orchestrator–Worker
A coordinator publishes tasks; workers in a consumer group process partitions; results flow back on a response topic.
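A minimal worker-side sketch with confluent-kafka; the topic names and run_task() are hypothetical, and the consumer group spreads the tasks topic's partitions across workers:

```python
import json
from confluent_kafka import Consumer, Producer

consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "workers",               # the group splits partitions across workers
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["tasks"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    task = json.loads(msg.value())
    outcome = run_task(task)             # run_task() is a stand-in for agent work
    # Key the result so the coordinator can correlate it with the task.
    producer.produce("results", json.dumps(outcome), key=msg.key())
    producer.flush()
```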
Hierarchical
High-level agents decompose goals into sub-events for lower-level agents; all coordination remains event-based.
Blackboard
Agents share intermediate facts on a shared topic; peers react to new knowledge without direct RPCs.
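Sketched with confluent-kafka, a blackboard agent tails the shared topic and posts derived facts back to it; each agent uses its own group id so every agent sees every fact (infer() and the topic name are hypothetical):

```python
import json
from confluent_kafka import Consumer, Producer

# Unique group id per agent: every agent sees every fact on the blackboard.
consumer = Consumer({
    "bootstrap.servers": "localhost:9092",
    "group.id": "agent-a",
})
producer = Producer({"bootstrap.servers": "localhost:9092"})
consumer.subscribe(["blackboard"])

while True:
    msg = consumer.poll(timeout=1.0)
    if msg is None or msg.error():
        continue
    fact = json.loads(msg.value())
    new_fact = infer(fact)               # infer() is a stand-in for agent reasoning;
    if new_fact is not None:             # returning None avoids reacting to own posts
        producer.produce("blackboard", json.dumps(new_fact))
        producer.flush()
```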
Market-Based
Bids and asks are events; a matcher agent emits awards; agents scale without N×N connections and mix cleanly with existing workflows and data pipelines.
What is Orca?
A streaming-native infrastructure layer to deploy, coordinate, and scale AI agents in production—not another framework. Agents run on an event bus, maintain state, and communicate via streams.
How is Orca different from agent frameworks?
Frameworks help you build agents; Orca runs them reliably at scale with streaming I/O, state, registry, and governance.
Which agent frameworks are supported?
OpenAI Agents and Google ADK today (Python), with LangChain/LlamaIndex compatibility planned.
How do agents use tools?
Through Model Context Protocol (MCP) so tools/services are discoverable, governed, and auditable.
How do I get access?
Contact us to request preview access on StreamNative Cloud.