# Architecture Overview

## Three Primitives
Services in Orion are composed from three building blocks:
| Primitive | Role | Examples |
|---|---|---|
| Channel | Service endpoint: sync (REST, HTTP) or async (Kafka) | POST /orders, GET /users/{id}, Kafka topic order.placed |
| Workflow | Pipeline of tasks that defines what the service does | Parse → validate → enrich → transform → respond |
| Connector | Named connection to an external system with auth and retries | Stripe API, PostgreSQL, Redis, Kafka cluster |
Channels receive traffic. Workflows process it. Connectors reach out to external systems. Everything else (rate limiting, metrics, circuit breakers, versioning) is handled by the platform.
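The three primitives can be pictured as plain data types. This is a hypothetical Rust model for illustration only; the field names are assumptions, not Orion's actual types.

```rust
// Hypothetical model of the three primitives (illustrative, not Orion's real API).
#[allow(dead_code)]
enum Channel {
    Rest { method: String, path: String }, // e.g. POST /orders
    Kafka { topic: String },               // e.g. order.placed
}

struct Workflow {
    tasks: Vec<String>, // ordered pipeline: parse → validate → … → respond
}

struct Connector {
    name: String,     // e.g. "stripe"
    max_retries: u32, // auth and retry policy live here, not in workflows
}

fn main() {
    let ch = Channel::Rest { method: "POST".into(), path: "/orders".into() };
    let wf = Workflow {
        tasks: vec!["parse_json".into(), "validation".into(), "publish_json".into()],
    };
    let conn = Connector { name: "stripe".into(), max_retries: 3 };

    if let Channel::Rest { method, path } = &ch {
        assert_eq!((method.as_str(), path.as_str()), ("POST", "/orders"));
    }
    assert_eq!(wf.tasks.len(), 3);
    assert_eq!((conn.name.as_str(), conn.max_retries), ("stripe", 3));
    println!("ok");
}
```

The point of the split: a channel never knows how it is processed, a workflow never knows how it was triggered, and a connector never knows who is calling it.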
## Deployment Topology

### Before Orion
Every piece of business logic is its own service to build, deploy, and operate, each with its own infrastructure stack:
4 services x (code + Dockerfile + CI pipeline + health checks + metrics agent + log agent + sidecar proxy + scaling policy + secret config + canary rollout) = dozens of components to build, wire, and keep running.
### After Orion
One Orion instance replaces all four:
No API gateway needed. Governance is built in. One binary to deploy.
The best of both worlds: each channel and workflow is independently versioned, testable, and deployable. The modularity of microservices with the operational simplicity of a monolith.
### Deploy Anywhere
Single binary. SQLite by default: no database to provision, no runtime dependencies. Need more scale? Swap to PostgreSQL or MySQL by changing `storage.url`. No rebuild needed.
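As a hypothetical sketch of that swap (only the `storage.url` key is from the docs; the surrounding structure and URL formats are assumptions):

```yaml
storage:
  # Default: embedded SQLite, zero provisioning.
  url: sqlite://orion.db
  # Need more scale? Point at PostgreSQL instead; no rebuild:
  # url: postgres://user:pass@db-host:5432/orion
```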
Same channel definitions work in any topology: run everything in one instance, split channels across instances with include/exclude filters, or deploy as sidecars.
## Request Processing Flow
- Route Resolution: REST pattern matching finds the channel, or falls back to name lookup
- Channel Registry: enforces deduplication, rate limits, input validation, backpressure, and checks the response cache
- Engine: the workflow engine sits behind a double-Arc (`Arc<RwLock<Arc<Engine>>>`), allowing zero-downtime swaps
- Workflow Matcher: evaluates JSONLogic conditions and rollout percentages to pick the right workflow
- Task Pipeline: executes functions in order (parse, map, filter, http_call, db_read, etc.)
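The double-Arc trick can be sketched in a few lines. This is a minimal illustration of the pattern, assuming a stand-in `Engine` type; Orion's actual engine is not shown in the docs.

```rust
use std::sync::{Arc, RwLock};

// Hypothetical stand-in for the workflow engine.
struct Engine {
    version: u32,
}

// Outer Arc: the handle shared across request handlers.
// RwLock: guards only the swap point.
// Inner Arc: the engine snapshot a request actually runs against.
type EngineHandle = Arc<RwLock<Arc<Engine>>>;

// Readers hold the read lock just long enough to clone the inner Arc,
// then process the request against that snapshot with no lock held.
fn snapshot(handle: &EngineHandle) -> Arc<Engine> {
    handle.read().unwrap().clone()
}

// A swap holds the write lock only for a pointer store; in-flight
// requests keep their old snapshot alive until they finish.
fn swap(handle: &EngineHandle, next: Engine) {
    *handle.write().unwrap() = Arc::new(next);
}

fn main() {
    let handle: EngineHandle = Arc::new(RwLock::new(Arc::new(Engine { version: 1 })));
    let in_flight = snapshot(&handle);    // a request starts on v1
    swap(&handle, Engine { version: 2 }); // hot-swap while it runs
    assert_eq!(in_flight.version, 1);     // the old request is unaffected
    assert_eq!(snapshot(&handle).version, 2); // new requests see v2
    println!("ok");
}
```

Because the old inner `Arc` stays alive until its last clone drops, a swap never interrupts a request already in flight: that is what makes the upgrade zero-downtime.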
## Sync and Async
| Mode | Trigger | Behavior |
|---|---|---|
| Sync | POST /api/v1/data/{channel} | Immediate response |
| Async | POST /api/v1/data/{channel}/async | Returns trace_id, poll later |
| REST | GET /api/v1/data/orders/{id} | Matched by route pattern |
| Kafka | topic order.placed | Consumed automatically |
Sync channels respond immediately. Async channels return a trace ID; poll GET /api/v1/data/traces/{id} for results. Kafka channels consume from topics configured in the DB or config file.
Bridging is a pattern, not a feature. A sync workflow can publish_kafka and return 202. An async channel picks it up from there.
## Service Composition
Most platforms require HTTP calls between services, adding latency, failure modes, and serialization overhead. Orion’s channel_call invokes another channel’s workflow in-process with zero network round-trip:
```
POST /orders (order-processing workflow)
├── parse_json → extract order data
├── channel_call → "inventory-check" channel (in-process)
├── channel_call → "customer-lookup" channel (in-process)
├── map → compute pricing with enriched data
└── publish_json → return combined result
```
Each composed channel has its own workflow, versioning, and governance, but calls between them are function calls, not network hops. Cycle detection prevents infinite recursion.
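One way such cycle detection could work is to track the call stack of channel names and reject re-entry. The following is a hedged sketch of the idea (channel graph, function names, and error format are all assumptions, not Orion's implementation):

```rust
use std::collections::HashMap;

// Hypothetical sketch: channels as a dependency graph. channel_call
// follows edges in-process; a name already on the call stack means a
// cycle, which is rejected instead of recursing forever.
fn invoke(
    graph: &HashMap<&str, Vec<&str>>,
    name: &str,
    stack: &mut Vec<String>,
) -> Result<(), String> {
    if stack.iter().any(|s| s == name) {
        return Err(format!("cycle detected at '{name}'"));
    }
    stack.push(name.to_string());
    for dep in graph.get(name).into_iter().flatten() {
        invoke(graph, dep, stack)?; // each channel_call is a plain function call
    }
    stack.pop();
    Ok(())
}

fn main() {
    let mut graph: HashMap<&str, Vec<&str>> = HashMap::new();
    graph.insert("orders", vec!["inventory-check", "customer-lookup"]);
    graph.insert("inventory-check", vec![]);
    graph.insert("customer-lookup", vec![]);
    assert!(invoke(&graph, "orders", &mut Vec::new()).is_ok());

    // Introduce a cycle: inventory-check calls back into orders.
    graph.insert("inventory-check", vec!["orders"]);
    assert!(invoke(&graph, "orders", &mut Vec::new()).is_err());
    println!("ok");
}
```

Note that the stack is popped on the way out, so the same channel may legitimately appear twice in a composition as long as it is not its own (transitive) caller.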
## Built-in Task Functions
| Function | Description |
|---|---|
| parse_json | Parse payload into the data context |
| parse_xml | Parse XML payloads into structured JSON |
| filter | Allow or halt processing based on JSONLogic conditions |
| map | Transform and reshape JSON using JSONLogic expressions |
| validation | Enforce required fields and constraints |
| http_call | Invoke downstream APIs via connectors |
| channel_call | Invoke another channel's workflow in-process |
| db_read / db_write | Execute SQL queries, return rows/affected count |
| cache_read / cache_write | Read/write to in-memory or Redis cache |
| mongo_read | Query MongoDB collections |
| publish_json / publish_xml | Serialize data to JSON or XML output |
| publish_kafka | Publish messages to Kafka topics |
| log | Emit structured log entries |
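These functions compose into the ordered pipelines described above. As a minimal sketch of that execution model (the `Ctx`/`Task` types and toy tasks are assumptions, not Orion's real signatures), a workflow runs each task against a shared data context and halts at the first failure, which is how `filter` and `validation` stop a request:

```rust
use std::collections::HashMap;

// Hypothetical data context and task signature for illustration.
type Ctx = HashMap<String, String>;
type Task = fn(&mut Ctx) -> Result<(), String>;

// Run tasks in order; the first Err short-circuits the pipeline.
fn run_pipeline(tasks: &[Task], ctx: &mut Ctx) -> Result<(), String> {
    for task in tasks {
        task(ctx)?;
    }
    Ok(())
}

// Toy stand-ins for parse_json / validation / map.
fn parse(ctx: &mut Ctx) -> Result<(), String> {
    ctx.insert("order_id".into(), "42".into());
    Ok(())
}
fn validate(ctx: &mut Ctx) -> Result<(), String> {
    ctx.contains_key("order_id")
        .then_some(())
        .ok_or_else(|| "missing order_id".to_string())
}
fn map_price(ctx: &mut Ctx) -> Result<(), String> {
    ctx.insert("price".into(), "9.99".into());
    Ok(())
}

fn main() {
    let tasks: Vec<Task> = vec![parse, validate, map_price];
    let mut ctx = Ctx::new();
    assert!(run_pipeline(&tasks, &mut ctx).is_ok());
    assert_eq!(ctx.get("price").map(String::as_str), Some("9.99"));

    // Skipping parse makes validation halt the pipeline.
    let bad: Vec<Task> = vec![validate, map_price];
    assert!(run_pipeline(&bad, &mut Ctx::new()).is_err());
    println!("ok");
}
```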