
Orion is a declarative services runtime written in Rust. Instead of writing, deploying, and operating a microservice for every piece of business logic, you declare what the service should do, and Orion runs it. Architectural governance — observability, rate limiting, circuit breakers, versioning, input validation, and more — is built in.

AI generates workflows; Orion provides the governance. The platform guarantees that every service gets health checks, metrics, retries, and error handling, regardless of how the workflow was created.

Is Orion Right for You?

| If you need to… | Orion? | Why |
|---|---|---|
| Turn business rules into live REST/Kafka services | Yes | Define logic as JSON workflows, deploy with one API call |
| Let AI generate and manage business logic | Yes | Built-in validation, dry-run testing, and draft-before-activate safety |
| Replace a handful of single-purpose microservices | Yes | One instance handles many channels, governance included |
| Use a rule engine like Drools | Not quite | Orion uses JSONLogic: lightweight and AI-friendly, but not a full RETE-based rule engine |
| Embed a workflow engine library in your app | No | Orion is a standalone runtime. For an embeddable engine, see dataflow-rs |
| Orchestrate long-running jobs (hours/days) | No | Use Temporal or Airflow. Orion is optimized for request-response and event processing |
| Run a full API gateway with plugin ecosystem | No | Use Kong or Envoy. Orion focuses on service logic, not proxy features |
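The Drools row deserves a concrete illustration. JSONLogic expresses rules as plain JSON, which is what makes them easy for AI to generate and for a runtime to validate. Below is a minimal Python sketch of how the three operators used later on this page ("var", ">", "cat") evaluate against a data document. This is an illustrative toy, not Orion's engine, which pre-compiles rules in Rust.

```python
# Toy evaluator for the three JSONLogic operators used on this page.
# Illustrative only; Orion's real engine is a pre-compiled Rust implementation.

def evaluate(rule, data):
    if not isinstance(rule, dict):           # literals: numbers, strings, bools
        return rule
    op, args = next(iter(rule.items()))
    if op == "var":                          # dotted-path lookup, e.g. "data.order.total"
        value = data
        for key in args.split("."):
            value = value[key]
        return value
    if op == ">":                            # numeric comparison
        left, right = (evaluate(a, data) for a in args)
        return left > right
    if op == "cat":                          # string concatenation
        return "".join(str(evaluate(a, data)) for a in args)
    raise ValueError(f"unsupported operator: {op}")

doc = {"data": {"order": {"total": 25000}}}
condition = {">": [{"var": "data.order.total"}, 10000]}
alert = {"cat": ["High-value order: $", {"var": "data.order.total"}]}

print(evaluate(condition, doc))  # True
print(evaluate(alert, doc))      # High-value order: $25000
```

Because a rule is just data, it can be stored, versioned, diffed, and validated like any other JSON payload, which is the property the "AI-friendly" claim rests on.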

Three Primitives

You build services in Orion with three things:

┌─────────────┐       ┌──────────────┐       ┌─────────────┐
│   Channel   │──────▶│   Workflow   │──────▶│  Connector  │
│  (endpoint) │       │   (logic)    │       │  (external) │
└─────────────┘       └──────────────┘       └─────────────┘
| Primitive | What it is | Example |
|---|---|---|
| Channel | A service endpoint: sync (REST, HTTP) or async (Kafka) | POST /orders, GET /users/{id}, Kafka topic order.placed |
| Workflow | A pipeline of tasks that defines what the service does | Parse → validate → enrich → transform → respond |
| Connector | A named connection to an external system, with auth and retries | Stripe API, PostgreSQL, Redis, Kafka cluster |

Design-time: define channels, build workflows, configure connectors, test with dry-run, manage versions, all through the admin API.

Runtime: Orion routes traffic to channels, executes workflows, calls connectors, and handles observability automatically.
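To make the runtime half concrete, here is a minimal Python sketch of the execution model just described: a channel delivers a message, the workflow's tasks run in order, and each task fires only if its condition holds. The task and field names mirror the high-value-order example in the next section; this sketches the shape of the loop, not Orion's implementation.

```python
# Illustrative sketch of the runtime model: a workflow is an ordered list of
# tasks, and each task runs only when its condition evaluates truthy.
# Not Orion's implementation, just the shape of the execution loop.

def run_workflow(tasks, message):
    for task in tasks:
        if task.get("condition", lambda m: True)(message):
            task["apply"](message)
    return message

tasks = [
    {"id": "parse", "apply": lambda m: m},  # payload assumed already parsed here
    {
        "id": "flag",
        "condition": lambda m: m["data"]["order"]["total"] > 10000,
        "apply": lambda m: m["data"]["order"].update(flagged=True),
    },
]

msg = {"data": {"order": {"order_id": "ORD-9182", "total": 25000}}}
run_workflow(tasks, msg)
print(msg["data"]["order"]["flagged"])  # True
```

In Orion the conditions are JSONLogic documents rather than Python lambdas, which is what lets the whole workflow be declared, stored, and hot-reloaded as JSON.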

Your First Service in 2 Minutes

No code. No Dockerfile. No CI pipeline.

1. Start Orion

brew install GoPlasmatic/tap/orion-server   # or: curl installer, cargo install
orion-server

2. Create a workflow (AI-generated or hand-written)

curl -s -X POST http://localhost:8080/api/v1/admin/workflows \
  -H "Content-Type: application/json" \
  -d '{
    "workflow_id": "high-value-order",
    "name": "High-Value Order",
    "condition": true,
    "tasks": [
      { "id": "parse", "name": "Parse payload", "function": {
          "name": "parse_json",
          "input": { "source": "payload", "target": "order" }
      }},
      { "id": "flag", "name": "Flag order",
        "condition": { ">": [{ "var": "data.order.total" }, 10000] },
        "function": {
          "name": "map",
          "input": { "mappings": [
            { "path": "data.order.flagged", "logic": true },
            { "path": "data.order.alert", "logic": {
              "cat": ["High-value order: $", { "var": "data.order.total" }]
            }}
          ]}
      }}
    ]
  }'

# Activate it
curl -s -X PATCH http://localhost:8080/api/v1/admin/workflows/high-value-order/status \
  -H "Content-Type: application/json" -d '{"status": "active"}'

3. Create a channel (the service endpoint)

curl -s -X POST http://localhost:8080/api/v1/admin/channels \
  -H "Content-Type: application/json" \
  -d '{ "channel_id": "orders", "name": "orders", "channel_type": "sync",
        "protocol": "http", "route_pattern": "/orders",
        "methods": ["POST"], "workflow_id": "high-value-order" }'

# Activate
curl -s -X PATCH http://localhost:8080/api/v1/admin/channels/orders/status \
  -H "Content-Type: application/json" -d '{"status": "active"}'

4. Send a request — your service is live

curl -s -X POST http://localhost:8080/api/v1/data/orders \
  -H "Content-Type: application/json" \
  -d '{ "data": { "order_id": "ORD-9182", "total": 25000 } }'
{
  "status": "ok",
  "data": {
    "order": {
      "order_id": "ORD-9182",
      "total": 25000,
      "flagged": true,
      "alert": "High-value order: $25000"
    }
  }
}

That’s it. Rate limiting, metrics, health checks, and request tracing are already active. Change the threshold? One API call. No rebuild, no redeploy, no restart.
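For instance, flagging orders above $5,000 instead of $10,000 means changing one value in the flag task's condition and resubmitting the workflow through the same admin API (the exact update endpoint may vary by version; check the admin API reference):

```json
"condition": { ">": [{ "var": "data.order.total" }, 5000] }
```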

What’s Built In

Every channel gets production-grade features without writing a line of code:

| Feature | What it does | Configuration |
|---|---|---|
| Rate limiting | Throttle requests per client or globally | requests_per_second, burst, JSONLogic key |
| Timeouts | Cancel slow workflows, return 504 | timeout_ms per channel |
| Input validation | Reject bad requests at the boundary | JSONLogic with headers, query, path access |
| Backpressure | Shed load when overwhelmed, return 503 | max_concurrent (semaphore-based) |
| CORS | Control browser cross-origin access | allowed_origins per channel |
| Circuit breakers | Stop cascading failures to external services | Automatic per connector, admin API to inspect/reset |
| Versioning | Draft → active → archived lifecycle | Automatic version history, rollout percentages |
| Observability | Prometheus metrics, structured logs, distributed tracing | Always on, zero configuration |
| Health checks | Component-level status with degradation detection | GET /health, automatic |
| Deduplication | Prevent duplicate processing via idempotency keys | Idempotency-Key header, configurable window |
| Response caching | Cache responses for identical requests | TTL-based, configurable cache key fields |
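Pulling a few of these together, a channel definition might carry its governance settings alongside the routing fields from the earlier example. The exact field names and nesting below are illustrative assumptions; consult the channel schema in the admin API for the authoritative shape:

```json
{
  "channel_id": "orders",
  "channel_type": "sync",
  "protocol": "http",
  "route_pattern": "/orders",
  "rate_limit": { "requests_per_second": 100, "burst": 20 },
  "timeout_ms": 2000,
  "max_concurrent": 256,
  "allowed_origins": ["https://shop.example.com"]
}
```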

Performance

7K+ workflow requests/sec on a single instance (Apple M2 Pro, release build, 50 concurrent connections):

| Scenario | Req/sec | Avg Latency | P99 Latency |
|---|---|---|---|
| Simple workflow (1 task) | 7,417 | 6.70 ms | 16.80 ms |
| Complex workflow (4 tasks) | 7,044 | 7.00 ms | 23.50 ms |
| 12 workflows on one channel | 6,894 | 7.20 ms | 17.30 ms |

Under the hood: pre-compiled JSONLogic, zero-downtime hot reload, lock-free reads, SQLite in WAL mode, and an async-first design on Tokio.
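The pre-compilation point is the same trick available in any language: parse a rule into a closure once, then pay only the traversal cost per request instead of re-walking the JSON tree. A Python sketch of the idea, not Orion's Rust implementation, and covering only the operators this page uses:

```python
# Sketch of rule pre-compilation: turn a JSONLogic-style rule into a closure
# once, then call the closure per request. Illustrative; Orion does this in Rust.

def compile_rule(rule):
    if not isinstance(rule, dict):              # literal values become constants
        return lambda data: rule
    op, args = next(iter(rule.items()))
    if op == "var":                             # split the path at compile time
        path = args.split(".")
        def lookup(data):
            for key in path:
                data = data[key]
            return data
        return lookup
    if op == ">":                               # compile operands recursively
        left, right = (compile_rule(a) for a in args)
        return lambda data: left(data) > right(data)
    raise ValueError(f"unsupported operator: {op}")

check = compile_rule({">": [{"var": "order.total"}, 10000]})  # compiled once
print(check({"order": {"total": 25000}}))  # True
print(check({"order": {"total": 500}}))    # False
```

The win is that parsing, path splitting, and operator dispatch happen at workflow-activation time, leaving only field lookups and a comparison on the request path.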