Memory reference
The memory surface is the widest part of the public SDK. It covers write, recall, lifecycle, planning, task start, sessions, rules, tools, review packs, and a few debugging-oriented endpoints.
Aionis memory turns execution evidence into task-start guidance, planning context, lifecycle reuse, review packs, and workflow learning. The memory surface is where execution evidence becomes continuity infrastructure.
| Capability | What it covers | Key surface |
|---|---|---|
| Evidence intake | Persist real execution evidence, then read it back through structured recall and natural-language recall. | /v1/memory/write |
| Context assembly | Assemble planner-facing context, layered recall, and kickoff signals before an agent makes its first move. | /v1/memory/planning/* |
| First-action guidance | Turn accumulated evidence into a better opening move for repeated work instead of restarting from scratch. | /v1/memory/kickoff/* |
| Action retrieval | Expose the explicit next-action retrieval layer with evidence, source kind, and uncertainty instead of hiding it inside a generic recommendation. | /v1/memory/action/retrieval |
| Reuse signals | Rehydrate archived nodes and record whether reused memory actually helped when it was brought back. | /v1/memory/archive/* |
| Cold-memory control | Track semantic forgetting, archive relocation, and differential rehydration instead of treating forgetting as deletion. | semantic_forgetting_v1 + archive_relocation_v1 |
| Review-ready packs | Package continuity and evolution state into structures a human or host can inspect without reading raw stores. | /v1/memory/*review* |
| Policy memory | Persist stable tool policy, read it back through experience intelligence, and govern whether it stays active. | /v1/memory/tools/* + /v1/memory/policies/* |
| Longer-lived continuity | Carry memory across sessions, packs, delegation records, rules, tools, and pattern-level helper surfaces. | /v1/memory/sessions/* |

Read this page in one direction: write evidence first, assemble planning context second, ask for task-start guidance third, then use lifecycle and review paths only when continuity quality and reuse quality start to matter. If you begin at the heavy helper surfaces, the memory model will feel much more complicated than it really is.
Mental model
One sentence captures the mental model for this surface: memory in Aionis is valuable only insofar as later runtime paths can build on it. Every call below either deposits execution evidence or converts accumulated evidence into a better runtime decision.
Core memory families
The public memory surface breaks down into eight practical groups:
- write and recall
- archive rehydrate and node activation lifecycle
- planning and context assembly
- task start and experience intelligence
- sessions, packs, find, and resolve
- rules, tools, patterns, and payload rehydration
- policy memory, evolution review, and governance
- review packs and delegation-learning support
If you specifically care about lifecycle decay, archive planning, and selective restoration, read Semantic Forgetting after this page.
Think of memory as the layer that holds execution evidence, assembles better startup context, tracks reuse signals, and exposes review-ready continuity state.
How to choose the right call
| If you want to... | Start with... |
|---|---|
| Persist new execution evidence | memory.write(...) |
| Search with natural language | memory.recallText(...) |
| Get planner-facing context | memory.planningContext(...) |
| Get the best next first move | memory.taskStart(...) |
| Inspect the explicit next-action retrieval layer | memory.actionRetrieval(...) |
| Inspect heavier workflow and learning state | memory.experienceIntelligence(...) or memory.executionIntrospect(...) |
| Bring archived memory back into active use | memory.archive.rehydrate(...) |
| Record whether reused memory helped | memory.nodes.activate(...) |
| Build review-ready state | memory.reviewPacks.* |
Most-used SDK calls
| SDK method | Route | What it is for |
|---|---|---|
| memory.write(...) | POST /v1/memory/write | Persist execution evidence into Lite |
| memory.archive.rehydrate(...) | POST /v1/memory/archive/rehydrate | Bring archived nodes back into warm or hot in Lite |
| memory.nodes.activate(...) | POST /v1/memory/nodes/activate | Record reuse outcome and activation feedback on Lite nodes |
| memory.anchors.rehydratePayload(...) | POST /v1/memory/anchor/payload/rehydrate | Restore only the anchor payload detail that is currently needed |
| memory.recallText(...) | POST /v1/memory/recall_text | Ask recall using natural language |
| memory.planningContext(...) | POST /v1/memory/planning/context | Get planner-facing recall and kickoff context |
| memory.contextAssemble(...) | POST /v1/memory/context/assemble | Build the final context runtime payload |
| memory.actionRetrieval(...) | POST /v1/memory/action/retrieval | Ask the runtime for an explicit tool, file, next-action, and evidence-backed retrieval decision |
| memory.experienceIntelligence(...) | POST /v1/memory/experience/intelligence | Inspect learned workflow and tool guidance |
| memory.taskStart(...) | POST /v1/memory/kickoff/recommendation | Get the best first action for a repeated task |
| memory.executionIntrospect(...) | POST /v1/memory/execution/introspect | Pull the heavier local introspection surface |
| memory.tools.feedback(...) | POST /v1/memory/tools/feedback | Record a tool outcome and potentially materialize policy memory |
| memory.reviewPacks.evolution(...) | POST /v1/memory/evolution/review-pack | Review evolution state, policy review, and governance contracts |
| memory.policies.governanceApply(...) | POST /v1/memory/policies/governance/apply | Retire or reactivate a persisted policy memory |
Minimal write example
```javascript
await aionis.memory.write({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  input_text:
    "Patched serializer handling in src/routes/export.ts and verified the export response shape.",
});
```

What makes a good write:
- the scope matches the work you want to improve later
- the text records real execution, not generic commentary
- the actor and tenant are consistent with later reads
Weak writes produce weak later task starts.
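To see how a write feeds later recall, here is a self-contained sketch with an in-memory stub standing in for the real client. The stub, its synchronous methods, and the naive token match are all illustrative assumptions: the real SDK is Promise-based and recall is semantic, not substring matching.

```javascript
// Illustrative stub standing in for the real Aionis client. The real SDK
// hits POST /v1/memory/write and POST /v1/memory/recall_text and returns
// Promises; this sketch is synchronous and matches by tokens for brevity.
function makeStubClient() {
  const store = [];
  return {
    memory: {
      write(req) {
        store.push(req); // persist the evidence record
        return { ok: true };
      },
      recallText(req) {
        // Naive token overlap; real recall is semantic, not substring match.
        const tokens = req.query_text.toLowerCase().split(/\s+/);
        const hits = store.filter(
          (n) =>
            n.tenant_id === req.tenant_id &&
            n.scope === req.scope &&
            tokens.some((t) => n.input_text.toLowerCase().includes(t))
        );
        return { hits };
      },
    },
  };
}

const aionis = makeStubClient();
aionis.memory.write({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  input_text: "Patched serializer handling in src/routes/export.ts.",
});
const { hits } = aionis.memory.recallText({
  tenant_id: "default",
  scope: "repair-flow",
  query_text: "serializer repair",
});
// Consistent tenant, scope, and actor values are what make read-back work.
```

Note how a query against a different scope would miss entirely, which is exactly the scope-hygiene property the bullets above describe.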
Minimal planning example
```javascript
const planning = await aionis.memory.planningContext({
  tenant_id: "default",
  scope: "repair-flow",
  query_text: "repair export response serialization bug",
  context: {
    goal: "repair export response serialization bug",
    task_kind: "repair_export",
  },
  tool_candidates: ["bash", "edit", "test"],
  return_layered_context: true,
});
```

Read these fields first:

- kickoff_recommendation
- planner_packet
- workflow_signals
- pattern_signals
Task-start surfaces
If you want the shortest public entrypoint into memory-guided continuity, these are the important calls:
| SDK method | What comes back |
|---|---|
| memory.taskStart(...) | A compact first_action derived from the kickoff recommendation |
| memory.kickoffRecommendation(...) | The raw kickoff response and rationale |
| memory.actionRetrieval(...) | The explicit tool, file, next-action, evidence, and uncertainty layer |
| memory.experienceIntelligence(...) | Workflow, tool, and learning-oriented guidance |
Use taskStart first when you want the best first move. Use planningContext first when you want more than one hint and need the runtime to assemble planner-facing context.
If you specifically care about why the runtime picked that move, read Action Retrieval and Uncertainty and Gates next.
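When taskStart is wired into a host, a defensive consumer is worth having, because sparse evidence can produce a weak or missing first move. A sketch follows; the first_action and confidence field names are assumptions about the response shape, not the confirmed contract.

```javascript
// Illustrative helper: accept the recommended first move only when the
// response carries one with enough confidence, otherwise fall back to a
// generic opener. first_action and confidence are assumed field names.
function pickFirstAction(resp, fallback, minConfidence = 0.5) {
  if (resp && resp.first_action && (resp.confidence ?? 0) >= minConfidence) {
    return resp.first_action;
  }
  return fallback; // sparse or low-confidence evidence: use a generic opener
}

// In a real host this response would come from:
//   const resp = await aionis.memory.taskStart({ ... });
const firstMove = pickFirstAction(
  { first_action: "run the failing export test first", confidence: 0.8 },
  "explore the repo layout"
);
```

The fallback branch matters precisely because, as noted later on this page, taskStart is a surface fed by execution evidence rather than magic.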
Recommended call order
If you are integrating memory for the first time, this is the best progression:
1. memory.write(...)
2. memory.recallText(...)
3. memory.planningContext(...)
4. memory.taskStart(...)
5. memory.archive.rehydrate(...) and memory.nodes.activate(...), once reuse quality starts to matter
That order helps you understand the surface from evidence ingestion to better startup guidance.
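That progression can be expressed as one helper that works against any object exposing the SDK methods named above. This is a sketch: the call sequence comes from this page, but the synchronous calls and the minimal request bodies are simplifications (the real SDK returns Promises).

```javascript
// Illustrative first-integration pass: evidence in, recall check, planning
// context, then a task-start ask, in the recommended order. Synchronous
// calls here are a simplification of the Promise-based SDK.
function firstIntegrationPass(client, { tenant_id, scope, evidenceText, goal }) {
  const trace = [];
  client.memory.write({ tenant_id, scope, input_text: evidenceText });
  trace.push("write");
  client.memory.recallText({ tenant_id, scope, query_text: goal });
  trace.push("recallText");
  client.memory.planningContext({ tenant_id, scope, query_text: goal });
  trace.push("planningContext");
  const start = client.memory.taskStart({ tenant_id, scope, query_text: goal });
  trace.push("taskStart");
  return { trace, start };
}
```

Keeping tenant_id and scope identical across all four calls is the point: it is the same continuity thread being written and then read.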
Lifecycle surfaces
Lite now includes the local memory lifecycle routes through the public SDK:
```javascript
await aionis.memory.archive.rehydrate({
  tenant_id: "default",
  scope: "repair-flow",
  client_ids: ["billing-timeout-repair"],
  target_tier: "warm",
  reason: "bring the archived repair memory back into the active set",
});
```

```javascript
await aionis.memory.nodes.activate({
  tenant_id: "default",
  scope: "repair-flow",
  client_ids: ["billing-timeout-repair"],
  outcome: "positive",
  activate: true,
  reason: "the rehydrated node helped complete the repair",
});
```

These lifecycle calls matter because they let Lite distinguish between:
- memory that exists
- memory that should be active again
- memory that proved useful when reused
They now also feed the runtime's forgetting and relocation summaries, which means the public surfaces can explain why colder memory stayed out of the active set and how it should be restored later.
Semantic forgetting and selective rehydration
The newer forgetting path sits one layer under the lifecycle endpoints:
- importance and feedback update lifecycle signals
- semantic forgetting decides whether memory should be retained, demoted, archived, or reviewed
- archive relocation decides whether payload should move toward colder storage
- rehydration surfaces decide whether summary, partial, full, or differential restoration is enough
Use this page for the broad memory model. Use Semantic Forgetting for the dedicated forgetting and rehydration contract.
That is one of the main reasons the runtime can plausibly claim self-evolving continuity rather than static storage.
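The restoration modes listed above (summary, partial, full, differential) invite a host-side policy for which depth to request. Here is a hypothetical sketch; every field read (last_outcome, payload_bytes, has_diff_base) and the size threshold are invented for illustration and are not part of the Aionis contract.

```javascript
// Illustrative policy for choosing a rehydration depth. All node fields
// and thresholds here are hypothetical, not the real contract.
function chooseRehydrationMode(node) {
  if (node.last_outcome === "negative") return "summary"; // cheap peek only
  if (node.has_diff_base) return "differential"; // restore only the delta
  if ((node.payload_bytes ?? 0) > 64000) return "partial"; // skip bulky payload
  return "full";
}
```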
Continuity carriers and provenance
Lite now treats several runtime records as explicit continuity carriers:
- handoff
- session_event
- session
These are not only recallable records. They can now act as learning inputs that project into workflow memory.
What changed in practice:
- the carrier is normalized into execution-native memory
- the resulting workflow candidate keeps a distillation_origin
- stable workflow promotion preserves that origin instead of erasing it
- planning and introspection surfaces expose that provenance directly
The important public signals are:
- planning_summary.continuity_carrier_summary
- planning_summary.distillation_signal_summary
- planner_packet.sections.candidate_workflows
- executionIntrospect(...).demo_surface.sections.workflows
The most useful origin values to recognize are:
| Origin | What it means |
|---|---|
| handoff_continuity_carrier | This workflow was learned from structured handoff state |
| session_event_continuity_carrier | This workflow was learned from repeated session events |
| session_continuity_carrier | This workflow was learned from session-scoped continuity state |
| execution_write_projection | This workflow was projected from execution-native write evidence |
| replay_learning_episode | This workflow came from replay learning and playbook reuse |
This is one of the most important recent Memory v2 upgrades: it turns continuity learning from something implicit, buried in raw slots, and hard to trust into something visible and auditable through the default runtime summaries.
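A host that audits provenance can group candidate workflows by the origin values listed above. A sketch follows; the workflow item shape (beyond the distillation_origin field named on this page) is an assumption.

```javascript
// Illustrative audit helper: bucket candidate workflows by their
// distillation_origin so continuity-carrier learning is visible at a
// glance. The item shape is assumed for illustration.
function groupByDistillationOrigin(candidateWorkflows) {
  const groups = {};
  for (const wf of candidateWorkflows) {
    const origin = wf.distillation_origin || "unknown";
    if (!groups[origin]) groups[origin] = [];
    groups[origin].push(wf);
  }
  return groups;
}
```

Running this over planner_packet.sections.candidate_workflows would show at a glance how much of the learned workflow set came from handoffs versus session events versus direct write projection.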
Policy memory and evolution
The newest self-evolving surface in public Lite is policy memory.
This is the loop:
1. memory.tools.feedback(...) records whether a tool decision worked
2. stable pattern and workflow evidence can materialize a persisted policy memory
3. memory.experienceIntelligence(...) can read that memory back as a live policy_contract
4. memory.executionIntrospect(...) and memory.reviewPacks.evolution(...) can inspect it
5. memory.policies.governanceApply(...) can retire or reactivate it
If you want the fuller walkthrough, read Policy Memory and Evolution.
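A minimal host-side sketch of the governance step in that loop follows. The policy_contract fields (sample_count, success_rate), the thresholds, and the commented request are assumptions for illustration; the exact shapes belong to the raw contract sources.

```javascript
// Illustrative governance decision over a policy_contract read back from
// experienceIntelligence. sample_count, success_rate, and both thresholds
// are invented for illustration, not contract fields.
function governanceDecision(policyContract) {
  const samples = policyContract.sample_count ?? 0;
  if (samples < 5) return "keep"; // not enough evidence to judge yet
  if ((policyContract.success_rate ?? 1) < 0.4) return "retire";
  return "keep";
}

// In a real host, a "retire" decision would feed governanceApply:
//   if (governanceDecision(contract) === "retire") {
//     await aionis.memory.policies.governanceApply({ ... });
//   }
```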
Sessions and review-oriented helpers
These surfaces are useful when your host needs continuity state beyond a single task-start answer:
| SDK method family | Purpose |
|---|---|
| memory.sessions.* | Create sessions and append local session events |
| memory.packs.* | Export or import local packs |
| memory.find(...) / memory.resolve(...) | Direct local lookup and node resolution |
| memory.reviewPacks.* | Pull continuity or evolution review material |
| memory.delegationRecords.* | Read or write delegation-learning records |
Use these helpers when continuity is bigger than one answer:
- sessions when a task persists over time
- review packs when a human or host needs review-ready state
- delegation records when multi-agent learning needs to be kept explicitly
Tools, rules, and patterns
Lite also exposes a narrower local policy-learning loop:
| SDK method family | Purpose |
|---|---|
| memory.tools.select(...) | Tool selection decision path |
| memory.tools.feedback(...) | Store tool feedback and distill tool outcomes |
| memory.rules.state(...) | Update local rule state |
| memory.rules.evaluate(...) | Evaluate Lite rules |
| memory.patterns.suppress(...) | Operator stop-loss on a learned pattern |
| memory.anchors.rehydratePayload(...) | Expand an anchor-linked payload |
This family is easy to ignore, but it matters when the host needs more control over learned behavior rather than only consuming output recommendations.
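As one concrete use of this family, a host might gate memory.patterns.suppress(...) behind a stop-loss rule rather than calling it by hand. A sketch; the consecutive_failures field and the default threshold are assumptions, not contract fields.

```javascript
// Illustrative operator stop-loss: flag a learned pattern for suppression
// once it has failed enough times in a row. consecutive_failures and the
// threshold of 3 are assumptions for illustration.
function shouldSuppressPattern(pattern, maxConsecutiveFailures = 3) {
  return (pattern.consecutive_failures ?? 0) >= maxConsecutiveFailures;
}

// A host loop would then call memory.patterns.suppress(...) for any
// pattern where shouldSuppressPattern(pattern) returns true.
```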
Common integration mistakes
Most disappointing first integrations come from one of these:
- writing generic notes and expecting strong task-start guidance
- mixing unrelated work into the same scope
- querying one scope and writing into another
- expecting broader orchestration behavior from a local runtime path
- treating taskStart as magic instead of as a surface fed by execution evidence
If the runtime feels sparse, the first thing to inspect is usually the shape and quality of the written evidence.
Memory in Lite today
The Lite runtime includes:
- write and recall
- planning context and task start
- archive rehydrate and node activation
- review packs
- policy memory and governance apply
That means the main continuity loop is available through the public local runtime.
Raw contract sources
When you need exact field names, read: