Replay and playbooks
Replay is the producer side of Aionis Runtime. It records successful execution and turns that execution into reusable operating knowledge.
Without replay, continuity stays descriptive. Replay is what lets Aionis turn successful execution into something the runtime can promote, dispatch, validate, and reuse later.
Capture the execution
Open a replay run, record intent and outcome at the step level, and close the run with clear result semantics.
/v1/memory/replay/run/*

Produce a playbook
Turn a completed run into a reusable playbook artifact instead of leaving it as one finished event.
/v1/memory/replay/playbooks/compile_from_run

Decide reuse status
Move a playbook through candidate and promotion decisions before relying on it as stable operating knowledge.
/v1/memory/replay/playbooks/promote

Run or dispatch
Execute playbooks locally or dispatch them through the runtime when reuse is ready to be exercised.
/v1/memory/replay/playbooks/run

Patch the workflow
Repair a playbook when reuse reveals drift, then decide whether the repaired version should advance.
/v1/memory/replay/playbooks/repair

Gate the repair
Use replay repair review when changes need an explicit trust decision before they are reused again.
/v1/memory/replay/playbooks/repair/review

Mental model
Replay is what turns recorded execution into reusable runtime behavior.
Do not treat every successful run as a ready-made workflow. Replay only becomes valuable when the runtime can distinguish one clean success from a stable pattern. Record clearly first, compile second, validate third, and promote only when the behavior is worth repeating.
Replay run lifecycle
The base replay flow is:
- start a run
- record step before
- record step after
- end the run
- fetch the completed run
| SDK method | Route |
|---|---|
| memory.replay.run.start(...) | POST /v1/memory/replay/run/start |
| memory.replay.step.before(...) | POST /v1/memory/replay/step/before |
| memory.replay.step.after(...) | POST /v1/memory/replay/step/after |
| memory.replay.run.end(...) | POST /v1/memory/replay/run/end |
| memory.replay.run.get(...) | POST /v1/memory/replay/runs/get |
What each replay phase is doing
| Phase | Why it exists |
|---|---|
| run.start | Open a durable execution record for the run |
| step.before | Record the intended action and preconditions |
| step.after | Record what actually happened |
| run.end | Mark the overall outcome and summary |
| run.get | Inspect the recorded execution after the fact |
Minimal replay example
```typescript
await aionis.memory.replay.run.start({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  run_id: "repair-run-1",
  goal: "repair export response serialization bug",
});

await aionis.memory.replay.step.before({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  run_id: "repair-run-1",
  step_index: 1,
  tool_name: "edit",
  tool_input: { file_path: "src/routes/export.ts" },
});

await aionis.memory.replay.step.after({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  run_id: "repair-run-1",
  step_index: 1,
  status: "success",
  output_signature: {
    kind: "patch_result",
    summary: "patched export serializer handling",
  },
});
```

To make the run reusable, end it explicitly:
```typescript
await aionis.memory.replay.run.end({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  run_id: "repair-run-1",
  status: "success",
  summary: "patched export serializer and validated the route output",
});
```

Playbook operations
Once a run ends, the important next step is turning it into a playbook.
| SDK method | Route | Purpose |
|---|---|---|
| memory.replay.playbooks.compileFromRun(...) | POST /v1/memory/replay/playbooks/compile_from_run | Build a playbook from a completed replay run |
| memory.replay.playbooks.get(...) | POST /v1/memory/replay/playbooks/get | Fetch one playbook |
| memory.replay.playbooks.candidate(...) | POST /v1/memory/replay/playbooks/candidate | Evaluate candidate state |
| memory.replay.playbooks.promote(...) | POST /v1/memory/replay/playbooks/promote | Promote a playbook version |
| memory.replay.playbooks.repair(...) | POST /v1/memory/replay/playbooks/repair | Patch a playbook definition |
| memory.replay.playbooks.run(...) | POST /v1/memory/replay/playbooks/run | Execute a playbook locally |
| memory.replay.playbooks.dispatch(...) | POST /v1/memory/replay/playbooks/dispatch | Dispatch a playbook run |
| memory.replay.playbooks.repairReview(...) | POST /v1/memory/replay/playbooks/repair/review | Lite replay repair review subset |
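Before promoting, it is worth fetching the compiled playbook and its candidate state. The sketch below shows that inspection step; the `aionis` object here is a local stand-in so the example is self-contained, and the response fields it returns are illustrative assumptions, not the documented schema.

```typescript
// Stand-in transport so the sketch runs on its own; the real SDK client
// performs these POSTs against the routes in the table above.
type Req = Record<string, unknown>;
const post = async (route: string, body: Req) => ({ route, body, ok: true });

const aionis = {
  memory: {
    replay: {
      playbooks: {
        get: (body: Req) => post("/v1/memory/replay/playbooks/get", body),
        candidate: (body: Req) =>
          post("/v1/memory/replay/playbooks/candidate", body),
      },
    },
  },
};

async function inspectBeforePromotion() {
  const common = {
    tenant_id: "default",
    scope: "repair-flow",
    actor: "docs-example",
    playbook_id: "repair-export",
  };
  // Fetch the compiled playbook artifact...
  const playbook = await aionis.memory.replay.playbooks.get(common);
  // ...then evaluate its candidate state before deciding to promote.
  const candidate = await aionis.memory.replay.playbooks.candidate(common);
  return { playbook, candidate };
}
```

The point of the pattern: promotion is a trust decision, so read the candidate state first rather than promoting straight from compilation.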
Recommended progression
Use replay in this order:
- record one successful run cleanly
- compile a playbook from that run
- inspect candidate and promotion state
- promote when the playbook is stable enough
- run or dispatch the promoted playbook
- repair and review when reuse needs adjustment
That is the shortest path from one-off success to reusable runtime behavior.
Replay is the producer side of execution memory. It captures the sequence, playbooks shape that sequence into reusable structure, review gates decide trust, and future task starts can then begin from something stronger than a fresh guess. That whole loop is the reason replay belongs near the center of Aionis rather than at the edge.
Compile and promote example
```typescript
await aionis.memory.replay.playbooks.compileFromRun({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  run_id: "repair-run-1",
  playbook_id: "repair-export",
  name: "Repair export serializer",
});

await aionis.memory.replay.playbooks.promote({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  playbook_id: "repair-export",
  target_status: "active",
  note: "validated on repeated export serializer repairs",
});
```

Run vs dispatch vs repair
| Call | Use it when... |
|---|---|
| playbooks.run(...) | you want to execute a playbook directly in Lite |
| playbooks.dispatch(...) | you want the runtime to dispatch execution through the playbook path |
| playbooks.repair(...) | the playbook needs structural adjustment |
| playbooks.repairReview(...) | the repair needs an explicit review decision |
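The table above can be sketched as one flow: execute locally or dispatch, and if reuse reveals drift, repair and then gate the repair behind review. The stand-in client below keeps the sketch self-contained; the payload fields beyond the ids, and the `status` check, are assumptions for illustration.

```typescript
// Stand-in transport; the real SDK client performs these POSTs.
type Req = Record<string, unknown>;
const post = async (route: string, body: Req) => ({
  route,
  body,
  status: "accepted",
});

const playbooks = {
  run: (b: Req) => post("/v1/memory/replay/playbooks/run", b),
  dispatch: (b: Req) => post("/v1/memory/replay/playbooks/dispatch", b),
  repair: (b: Req) => post("/v1/memory/replay/playbooks/repair", b),
  repairReview: (b: Req) => post("/v1/memory/replay/playbooks/repair/review", b),
};

async function exercisePlaybook(local: boolean) {
  const ids = {
    tenant_id: "default",
    scope: "repair-flow",
    actor: "docs-example",
    playbook_id: "repair-export",
  };
  // Execute directly in Lite, or hand execution to the runtime's dispatch path.
  const result = local ? await playbooks.run(ids) : await playbooks.dispatch(ids);
  if (result.status !== "accepted") {
    // Reuse revealed drift: patch the definition, then gate the repaired
    // version behind an explicit review decision before trusting it again.
    await playbooks.repair(ids);
    await playbooks.repairReview(ids);
  }
  return result;
}
```

The design choice worth noting: repair and review are separate calls, so a patched playbook never silently re-enters the reuse path without a trust decision.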
Why playbooks matter
Without replay, memory only describes what happened. With playbooks, the runtime can start to reuse how work got done.
That is the key loop:
successful execution -> replay run -> playbook -> stable workflow guidance -> better future task start

That is why replay is foundational to the "self-evolving" claim. Replay is how successful behavior becomes an asset instead of a finished event.
Replay provenance and stable workflow anchors
Replay now does a better job of preserving learning provenance when playbooks stabilize.
Two details matter:
- replay-learning episodes already project candidate workflows with explicit distillation_origin = "replay_learning_episode"
- stable replay workflow anchors now preserve that provenance through promotion and later normalization instead of dropping it
That means a replay-derived stable workflow is no longer just:
- a stable anchor
- a promoted playbook
It is also an inspectable answer to:
what kind of learning signal created this workflow?
In practice, that provenance now stays visible through:
- execution_native_v1.distillation
- memory.executionIntrospect(...)
- the planner packet
- demo workflow lines
This matters because replay now preserves where reusable structure came from as it becomes stable guidance.
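A provenance check based on the field names above might look like the sketch below. The nesting of `distillation_origin` under `execution_native_v1.distillation` is an assumption built from those names; the real introspection payload may shape them differently.

```typescript
// Assumed response shape for an introspection result; only the fields this
// sketch reads are modeled.
interface Distillation {
  distillation_origin?: string;
}
interface IntrospectResult {
  execution_native_v1?: { distillation?: Distillation };
}

// Returns true when a stable workflow's provenance says it was distilled
// from a replay-learning episode rather than some other learning signal.
function isReplayDerived(result: IntrospectResult): boolean {
  return (
    result.execution_native_v1?.distillation?.distillation_origin ===
    "replay_learning_episode"
  );
}
```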
How replay interacts with the rest of the runtime
Replay is tightly connected to the other public surfaces:
- memory and task start can benefit from promoted playbooks later
- automation can execute playbook-shaped graphs locally
- review runtime can review repaired playbooks before reuse
- handoff can capture state around incomplete or partial runs
Replay is best understood as part of the continuity loop rather than as a standalone logging surface.
Common mistakes
Replay integration is usually weak for one of these reasons:
- steps are recorded too loosely to reconstruct what mattered
- runs end without clear success/failure semantics
- teams compile playbooks before a run pattern is actually stable
- replay is used as audit history only, not as a path to reuse
If replay is not changing future task starts or automation behavior, you are probably recording history without closing the reuse loop.
Replay in Lite today
Lite includes:
- replay run lifecycle
- playbook compilation
- candidate and promotion flow
- repair and repair review
- local playbook execution and dispatch
That means replay is already part of the public runtime path for evaluation and integration.