Use Cases & Patterns

Real-world examples showing how AI generates Orion workflows from natural language. Every example follows the same pattern: describe what you need → AI generates the workflow → create a channel → send data → get results.

E-Commerce Order Classification

Classify orders into tiers and compute discounts.

AI prompt:

Create a workflow for the "orders" channel that:
1. Parses the payload into "order"
2. Assigns tiers based on amount:
   - VIP: amount >= 500, discount 15%
   - Premium: amount 100-500, discount 5%
   - Standard: amount < 100, no discount

Generated workflow:

{
  "name": "Order Classification",
  "condition": true,
  "tasks": [
    { "id": "parse", "name": "Parse Payload", "function": {
        "name": "parse_json", "input": { "source": "payload", "target": "order" }
    }},
    { "id": "vip_tier", "name": "Set VIP Tier",
      "condition": { ">=": [{ "var": "data.order.amount" }, 500] },
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.order.tier", "logic": "vip" },
        { "path": "data.order.discount_pct", "logic": 15 }
      ]}}
    },
    { "id": "premium_tier", "name": "Set Premium Tier",
      "condition": { "and": [
        { ">=": [{ "var": "data.order.amount" }, 100] },
        { "<": [{ "var": "data.order.amount" }, 500] }
      ]},
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.order.tier", "logic": "premium" },
        { "path": "data.order.discount_pct", "logic": 5 }
      ]}}
    },
    { "id": "standard_tier", "name": "Set Standard Tier",
      "condition": { "<": [{ "var": "data.order.amount" }, 100] },
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.order.tier", "logic": "standard" },
        { "path": "data.order.discount_pct", "logic": 0 }
      ]}}
    }
  ]
}

Send data:

curl -s -X POST http://localhost:8080/api/v1/data/orders \
  -H "Content-Type: application/json" \
  -d '{ "data": { "amount": 750, "product": "Diamond Ring" } }'

Response:

{
  "status": "ok",
  "data": {
    "order": { "amount": 750, "product": "Diamond Ring", "tier": "vip", "discount_pct": 15 }
  },
  "errors": []
}

Key patterns: Task-level conditions, computed output fields.
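The three task conditions partition the amount range, so exactly one tier task fires for any order. A plain-Python sketch of the same branching (illustrative only; the engine evaluates the JSONLogic conditions in the workflow above):

```python
def classify_order(amount):
    """Mirror the three mutually exclusive task-level conditions."""
    if amount >= 500:                                    # vip_tier condition
        return {"tier": "vip", "discount_pct": 15}
    if 100 <= amount < 500:                              # premium_tier condition
        return {"tier": "premium", "discount_pct": 5}
    return {"tier": "standard", "discount_pct": 0}       # standard_tier: amount < 100

print(classify_order(750))  # matches the curl example: vip tier, 15% discount
```

Note the boundary choices: 500 lands in VIP (`>=`), while 100 lands in Premium, so no amount matches two tiers or none.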

IoT Sensor Alert Classification

Classify sensor readings into severity levels using range-based conditions.

AI prompt:

Create a workflow for the "sensors" channel that classifies temperature readings:
- Critical: temperature > 90 or below 0, set alert flag
- Warning: temperature 70-90, set alert flag
- Normal: temperature 0-70, no alert
Parse the payload into "reading" and set severity and alert fields.

Generated workflow:

{
  "name": "Sensor Alert Pipeline",
  "condition": true,
  "tasks": [
    { "id": "parse", "name": "Parse Payload", "function": {
        "name": "parse_json", "input": { "source": "payload", "target": "reading" }
    }},
    { "id": "critical", "name": "Mark Critical",
      "condition": { "or": [
        { ">": [{ "var": "data.reading.temperature" }, 90] },
        { "<": [{ "var": "data.reading.temperature" }, 0] }
      ]},
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.reading.severity", "logic": "critical" },
        { "path": "data.reading.alert", "logic": true }
      ]}}
    },
    { "id": "warning", "name": "Mark Warning",
      "condition": { "and": [
        { ">": [{ "var": "data.reading.temperature" }, 70] },
        { "<=": [{ "var": "data.reading.temperature" }, 90] }
      ]},
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.reading.severity", "logic": "warning" },
        { "path": "data.reading.alert", "logic": true }
      ]}}
    },
    { "id": "normal", "name": "Mark Normal",
      "condition": { "and": [
        { ">=": [{ "var": "data.reading.temperature" }, 0] },
        { "<=": [{ "var": "data.reading.temperature" }, 70] }
      ]},
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.reading.severity", "logic": "normal" },
        { "path": "data.reading.alert", "logic": false }
      ]}}
    }
  ]
}

Send data:

curl -s -X POST http://localhost:8080/api/v1/data/sensors \
  -H "Content-Type: application/json" \
  -d '{ "data": { "temperature": 80, "sensor_id": "SENSOR-42" } }'

Response:

{
  "status": "ok",
  "data": {
    "reading": { "temperature": 80, "sensor_id": "SENSOR-42", "severity": "warning", "alert": true }
  },
  "errors": []
}

Input              Severity   Alert
temperature: 45    normal     false
temperature: 80    warning    true
temperature: 95    critical   true
temperature: -5    critical   true

Key patterns: Range-based classification with and/or conditions, boolean flags.
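The conditions in these workflows are JSONLogic expressions. A minimal evaluator for just the operators used here (var, and, or, and the comparisons; a sketch for intuition, not Orion's implementation) makes boundary behavior easy to check:

```python
def evaluate(logic, ctx):
    """Evaluate the small JSONLogic subset used in these examples."""
    if not isinstance(logic, dict):
        return logic                                  # literals pass through
    op, args = next(iter(logic.items()))
    if op == "var":                                   # dotted-path lookup into ctx
        val = ctx
        for key in args.split("."):
            val = val.get(key) if isinstance(val, dict) else None
        return val
    vals = [evaluate(a, ctx) for a in args]
    if op == "and": return all(vals)
    if op == "or":  return any(vals)
    if op == ">":   return vals[0] > vals[1]
    if op == ">=":  return vals[0] >= vals[1]
    if op == "<":   return vals[0] < vals[1]
    if op == "<=":  return vals[0] <= vals[1]
    raise ValueError(f"unsupported operator: {op}")

critical = {"or": [{">": [{"var": "data.reading.temperature"}, 90]},
                   {"<": [{"var": "data.reading.temperature"}, 0]}]}
ctx = {"data": {"reading": {"temperature": 90}}}
print(evaluate(critical, ctx))  # False: exactly 90 is a warning, not critical
```

Checking the edges this way catches off-by-one mistakes in AI-generated ranges before deployment.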

Webhook Payload Transformation

Normalize incoming webhook payloads from different providers into a consistent internal schema.

AI prompt:

Create a workflow for the "webhooks" channel that normalizes webhook payloads from any provider:
- Map "origin" to "source"
- Map "type" to "event_type"
- Map "body" to "payload"
- Add a "processed" flag set to true
Output should be under data.normalized.

Generated workflow:

{
  "name": "Webhook Transform Pipeline",
  "condition": true,
  "tasks": [
    { "id": "parse", "name": "Parse Payload", "function": {
        "name": "parse_json", "input": { "source": "payload", "target": "event" }
    }},
    { "id": "normalize", "name": "Normalize Schema", "function": {
        "name": "map", "input": { "mappings": [
          { "path": "data.normalized.source", "logic": { "var": "data.event.origin" } },
          { "path": "data.normalized.event_type", "logic": { "var": "data.event.type" } },
          { "path": "data.normalized.payload", "logic": { "var": "data.event.body" } },
          { "path": "data.normalized.processed", "logic": true }
        ]}
    }}
  ]
}

Send data:

curl -s -X POST http://localhost:8080/api/v1/data/webhooks \
  -H "Content-Type: application/json" \
  -d '{ "data": { "origin": "github", "type": "push", "body": {"ref": "refs/heads/main"} } }'

Response:

{
  "status": "ok",
  "data": {
    "normalized": { "source": "github", "event_type": "push", "payload": {"ref": "refs/heads/main"}, "processed": true }
  },
  "errors": []
}

Missing optional fields produce null values rather than errors, which makes the pipeline safe for partial payloads from different webhook providers.

Key patterns: Schema mapping with var, null-safe field access, static enrichment.
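The null-safe mapping semantics are easy to picture in plain Python (an illustrative sketch of what the map task does, not Orion internals): each var lookup degrades to null when the field is absent:

```python
def normalize(event):
    """Mirror the four mappings from the Normalize Schema task."""
    return {
        "source": event.get("origin"),       # var data.event.origin
        "event_type": event.get("type"),     # var data.event.type
        "payload": event.get("body"),        # var data.event.body
        "processed": True,                   # static enrichment
    }

# A partial payload with no "body" field still normalizes without errors:
print(normalize({"origin": "github", "type": "push"}))
```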

Notification Routing

Route notifications to different delivery channels based on severity.

AI prompt:

Create a workflow for the "notifications" channel that routes by severity:
- Log all notifications
- Send email for anything except "low" severity
- Send SMS only for "high" and "critical" severity
Parse the payload into "notification".

Generated workflow:

{
  "name": "Notification Router",
  "condition": true,
  "tasks": [
    { "id": "parse", "name": "Parse Payload", "function": {
        "name": "parse_json", "input": { "source": "payload", "target": "notification" }
    }},
    { "id": "log_all", "name": "Log All Notifications", "function": {
        "name": "map", "input": { "mappings": [
          { "path": "data.notification.logged", "logic": true }
        ]}
    }},
    { "id": "email", "name": "Send Email",
      "condition": { "!=": [{ "var": "data.notification.severity" }, "low"] },
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.notification.email_sent", "logic": true }
      ]}}
    },
    { "id": "sms", "name": "Send SMS for High/Critical",
      "condition": { "in": [{ "var": "data.notification.severity" }, ["high", "critical"]] },
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.notification.sms_sent", "logic": true }
      ]}}
    }
  ]
}

Send data:

curl -s -X POST http://localhost:8080/api/v1/data/notifications \
  -H "Content-Type: application/json" \
  -d '{ "data": { "message": "Disk usage at 92%", "severity": "high" } }'

Response:

{
  "status": "ok",
  "data": {
    "notification": { "message": "Disk usage at 92%", "severity": "high", "logged": true, "email_sent": true, "sms_sent": true }
  },
  "errors": []
}

Severity   Logged   Email   SMS
low        yes      no      no
medium     yes      yes     no
high       yes      yes     yes
critical   yes      yes     yes

In production, replace the map tasks with http_call tasks pointing to your email and SMS connectors.

Key patterns: Task-level condition gating, in operator for set membership, progressive pipeline.
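The truth table follows directly from the three gates. A small Python sketch of the same routing decisions (illustrative; the workflow's conditions do this declaratively):

```python
def route(severity):
    """Apply the three gates from the Notification Router workflow."""
    return {
        "logged": True,                                # log_all: unconditional
        "email_sent": severity != "low",               # email: != "low"
        "sms_sent": severity in ("high", "critical"),  # sms: "in" set membership
    }

print(route("medium"))  # logged and emailed, but no SMS
```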

Compliance Risk Classification

Classify transactions by risk level and use dry-run testing to verify workflows before activating them.

AI prompt:

Create a workflow for the "compliance" channel that classifies transaction risk:
- High risk: amount > 10000, requires manual review
- Normal risk: amount <= 10000, no review needed
Parse the payload into "txn".

Generated workflow:

{
  "name": "Risk Classifier",
  "condition": true,
  "tasks": [
    { "id": "parse", "name": "Parse Payload", "function": {
        "name": "parse_json", "input": { "source": "payload", "target": "txn" }
    }},
    { "id": "high_risk", "name": "Flag High Risk",
      "condition": { ">": [{ "var": "data.txn.amount" }, 10000] },
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.txn.risk_level", "logic": "high" },
        { "path": "data.txn.requires_review", "logic": true }
      ]}}
    },
    { "id": "normal_risk", "name": "Normal Risk",
      "condition": { "<=": [{ "var": "data.txn.amount" }, 10000] },
      "function": { "name": "map", "input": { "mappings": [
        { "path": "data.txn.risk_level", "logic": "normal" },
        { "path": "data.txn.requires_review", "logic": false }
      ]}}
    }
  ]
}

Dry-run before going live:

curl -s -X POST http://localhost:8080/api/v1/admin/workflows/<workflow-id>/test \
  -H "Content-Type: application/json" \
  -d '{"data": {"amount": 50000, "currency": "USD"}}'

Response:

{
  "matched": true,
  "trace": { "steps": [
    { "task_id": "parse", "result": "executed" },
    { "task_id": "high_risk", "result": "executed" },
    { "task_id": "normal_risk", "result": "skipped" }
  ]},
  "output": { "txn": { "amount": 50000, "currency": "USD", "risk_level": "high", "requires_review": true } }
}

The trace shows exactly which tasks ran and which were skipped. Verify the logic is correct before a single real transaction flows through.

Key patterns: Dry-run verification, execution trace inspection, regulatory workflow.
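In a test harness, the trace can be asserted programmatically. A small helper (assuming the trace shape shown above; `check_trace` and its parameters are names introduced here for illustration):

```python
def check_trace(trace, expect_run, expect_skip):
    """Compare a dry-run trace against expected executed/skipped task ids."""
    results = {step["task_id"]: step["result"] for step in trace["steps"]}
    ran = {t for t, r in results.items() if r == "executed"}
    skipped = {t for t, r in results.items() if r == "skipped"}
    return ran == set(expect_run) and skipped == set(expect_skip)

trace = {"steps": [
    {"task_id": "parse", "result": "executed"},
    {"task_id": "high_risk", "result": "executed"},
    {"task_id": "normal_risk", "result": "skipped"},
]}
print(check_trace(trace, ["parse", "high_risk"], ["normal_risk"]))  # True
```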

AI Workflow & CI/CD

AI writes workflows, not services. Instead of generating microservices that need their own governance, LLMs generate Orion workflows: constrained JSON that the platform validates, versions, and monitors automatically.

Prompt Templates

Structure your LLM prompts to produce valid Orion workflows. Here’s a reusable system prompt:

You generate Orion workflows in JSON format. Workflows have:
- name, condition (JSONLogic or true), continue_on_error (optional boolean)
- tasks: array of { id, name, condition (optional, JSONLogic), function: { name, input } }
- Every workflow starts with a parse_json task: { "name": "parse_json", "input": { "source": "payload", "target": "<entity>" } }
- Use "map" function with "mappings" array for transforms. Each mapping has "path" (dot notation) and "logic" (value or JSONLogic).
- Use "http_call" with "connector" (by name) for external API calls. Do not embed URLs or credentials in workflows.
- Use "channel_call" with "channel" (by name) for in-process inter-channel invocation.
- Task conditions use { "var": "data.<entity>.<field>" } to reference parsed data.

Output ONLY the JSON workflow. No explanation.

Validation Pipeline

Every AI-generated workflow should go through this pipeline before reaching production:

  1. Generate: use your LLM with the prompt template above
  2. Validate: POST /api/v1/admin/workflows/validate to check structure
  3. Create as draft: POST /api/v1/admin/workflows (workflows are created as drafts by default, not loaded into the engine)
  4. Dry-run: POST /api/v1/admin/workflows/{id}/test with representative test data
  5. Check the trace: verify the right tasks ran, the right ones were skipped, and output matches expectations
  6. Activate: PATCH /api/v1/admin/workflows/{id}/status with "status": "active"
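Step 2 can also be approximated client-side before anything is POSTed. A rough structural check based on the shape described in the prompt template (a sketch only; the server-side validate endpoint remains authoritative):

```python
import json

def rough_validate(workflow_json):
    """Cheap local sanity check of an AI-generated workflow string."""
    try:
        wf = json.loads(workflow_json)
    except json.JSONDecodeError:
        return False                                   # LLM emitted non-JSON
    if not isinstance(wf.get("name"), str) or "condition" not in wf:
        return False
    tasks = wf.get("tasks")
    if not isinstance(tasks, list) or not tasks:
        return False
    for task in tasks:                                 # each task needs id + function
        fn = task.get("function", {})
        if "id" not in task or "name" not in fn or "input" not in fn:
            return False
    return True

print(rough_validate('{"name": "X", "condition": true, "tasks": []}'))  # False: empty tasks
```

Rejecting malformed output locally lets the generation step retry cheaply before touching the admin API.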

CI/CD Pipeline

Integrate AI workflow generation into your deployment pipeline. Workflows are JSON files that version, diff, and review like any other config.

AI generates workflow → commit as JSON → CI runs dry-run → review → import

GitHub Actions example:

name: Validate AI Workflows
on:
  pull_request:
    paths: ['workflows/**/*.json']
  push:
    branches: [main]
    paths: ['workflows/**/*.json']

jobs:
  validate:
    runs-on: ubuntu-latest
    services:
      orion:
        image: ghcr.io/goplasmatic/orion:latest
        ports: ['8080:8080']
    steps:
      - uses: actions/checkout@v4

      - name: Import workflows (as drafts)
        run: |
          shopt -s globstar  # enable ** recursive globbing in bash
          for file in workflows/**/*.json; do
            curl -s -X POST http://localhost:8080/api/v1/admin/workflows \
              -H "Content-Type: application/json" \
              -d @"$file"
          done

      - name: Dry-run test cases
        run: |
          shopt -s globstar  # enable ** recursive globbing in bash
          for test in workflows/tests/**/*.json; do
            WORKFLOW_ID=$(jq -r '.workflow_id' "$test")
            DATA=$(jq -c '.data' "$test")
            EXPECTED=$(jq -c '.expected_output' "$test")

            RESULT=$(curl -s -X POST \
              "http://localhost:8080/api/v1/admin/workflows/${WORKFLOW_ID}/test" \
              -H "Content-Type: application/json" \
              -d "$DATA")

            OUTPUT=$(echo "$RESULT" | jq -c '.output')
            if [ "$OUTPUT" != "$EXPECTED" ]; then
              echo "FAIL: $test"
              echo "Expected: $EXPECTED"
              echo "Got: $OUTPUT"
              exit 1
            fi
          done

      - name: Deploy to production
        if: github.event_name == 'push' && github.ref == 'refs/heads/main'
        run: |
          shopt -s globstar  # enable ** recursive globbing in bash
          for file in workflows/**/*.json; do
            curl -s -X POST "${{ secrets.ORION_URL }}/api/v1/admin/workflows" \
              -H "Content-Type: application/json" \
              -d @"$file"
          done

Test case format: store test cases alongside workflows:

workflows/
  fraud-detection.json       # The workflow
  tests/
    fraud-high-risk.json     # Test case
    fraud-clear.json         # Test case

Each test case:

{
  "workflow_id": "fraud-detection",
  "data": { "data": { "amount": 15000, "country": "US" } },
  "expected_output": { "order": { "amount": 15000, "risk": "high", "requires_review": true } }
}
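The comparison the CI job performs with jq can be expressed as a small helper too (`check_case` is a name introduced here; it assumes the test-case shape above and receives the dry-run's output):

```python
def check_case(test_case, actual_output):
    """Compare a dry-run's output against a test case's expected_output."""
    expected = test_case["expected_output"]
    if actual_output == expected:
        return True
    print(f"FAIL: expected {expected}, got {actual_output}")
    return False

case = {
    "workflow_id": "fraud-detection",
    "data": {"data": {"amount": 15000, "country": "US"}},
    "expected_output": {"order": {"amount": 15000, "risk": "high", "requires_review": True}},
}
print(check_case(case, {"order": {"amount": 15000, "risk": "high", "requires_review": True}}))  # True
```

Comparing parsed structures rather than serialized strings avoids false failures from key ordering or whitespace, which the jq string comparison in the CI example is sensitive to.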

Safety Guardrails

AI-generated workflows get the same governance as hand-written ones:

  • Version history: every workflow change is recorded. Roll back if an AI-generated workflow misbehaves.
  • Draft status: workflows are created as draft by default and are not loaded into the engine until explicitly activated.
  • Dry-run before activate: test with representative data and inspect the full execution trace.
  • Audit trail: every workflow version is recorded in the workflows table with incrementing version numbers.
  • Connectors isolate secrets: AI generates workflows that reference connector names, never credentials.

Common Workflow Patterns

The parse-then-process pattern

Every workflow that reads input data must start with parse_json. Without it, task conditions referencing data.X evaluate against an empty context and never match.

{
  "tasks": [
    { "id": "parse", "function": { "name": "parse_json", "input": { "source": "payload", "target": "order" } } },
    { "id": "process", "condition": { ">": [{ "var": "data.order.total" }, 100] }, "function": { "..." : "..." } }
  ]
}

Task-level vs workflow-level conditions

  • Workflow-level condition: determines whether the entire workflow matches. Set to true for workflows that always run.
  • Task-level condition: determines whether a specific task within a matched workflow executes. Use for branching logic within a pipeline.

{
  "condition": true,
  "tasks": [
    { "id": "always", "function": { "..." : "..." } },
    { "id": "conditional", "condition": { ">": [{ "var": "data.amount" }, 500] }, "function": { "..." : "..." } }
  ]
}

External API calls with connectors

Keep credentials in connectors, reference them by name in workflows:

{
  "tasks": [
    { "id": "parse", "function": { "name": "parse_json", "input": { "source": "payload", "target": "event" } } },
    { "id": "notify", "function": { "name": "http_call", "input": {
        "connector": "slack-webhook",
        "method": "POST",
        "body_logic": { "var": "data.event" }
    }}}
  ]
}

Inter-channel composition with channel_call

Invoke another channel’s workflow in-process for service composition:

{
  "tasks": [
    { "id": "parse", "function": { "name": "parse_json", "input": { "source": "payload", "target": "order" } } },
    { "id": "enrich", "function": { "name": "channel_call", "input": {
        "channel": "customer-lookup",
        "data_logic": { "var": "data.order.customer_id" },
        "response_path": "data.customer"
    }}},
    { "id": "process", "condition": { "==": [{ "var": "data.customer.tier" }, "vip"] }, "function": { "..." : "..." } }
  ]
}