
Memory reference

The memory surface is the widest part of the public SDK. It covers write, recall, lifecycle, planning, task start, sessions, rules, tools, review packs, and a few debugging-oriented endpoints.

What memory means here

Aionis memory turns execution evidence into task-start guidance, planning context, lifecycle reuse, review packs, and workflow learning. The memory surface is where execution evidence becomes continuity infrastructure.

Write + recall

Evidence intake

Persist real execution evidence, then read it back through structured recall and natural-language recall.

/v1/memory/write
Planning

Context assembly

Assemble planner-facing context, layered recall, and kickoff signals before an agent makes its first move.

/v1/memory/planning/*
Task start

First-action guidance

Turn accumulated evidence into a better opening move for repeated work instead of restarting from scratch.

/v1/memory/kickoff/*
Decision layer

Action retrieval

Expose the explicit next-action retrieval layer with evidence, source kind, and uncertainty instead of hiding it inside a generic recommendation.

/v1/memory/action/retrieval
Lifecycle

Reuse signals

Rehydrate archived nodes and record whether reused memory actually helped when it was brought back.

/v1/memory/archive/*
Forgetting

Cold-memory control

Track semantic forgetting, archive relocation, and differential rehydration instead of treating forgetting as deletion.

semantic_forgetting_v1 + archive_relocation_v1
Review

Review-ready packs

Package continuity and evolution state into structures a human or host can inspect without reading raw stores.

/v1/memory/*review*
Evolution

Policy memory

Persist stable tool policy, read it back through experience intelligence, and govern whether it stays active.

/v1/memory/tools/* + /v1/memory/policies/*
Sessions + helpers

Longer-lived continuity

Carry memory across sessions, packs, delegation records, rules, tools, and pattern-level helper surfaces.

/v1/memory/sessions/*
Operating rule

Read this page in one direction: write evidence first, assemble planning context second, ask for task-start guidance third, then use lifecycle and review paths only when continuity quality and reuse quality start to matter. If you begin at the heavy helper surfaces, the memory model will feel much more complicated than it really is.

Memory gets stronger only when evidence, reuse, and review stay connected.

Mental model

Use this mental model for the memory surface:

Memory in Aionis is useful because later runtime paths can build on it.

Core memory families

The public memory surface breaks down into eight practical groups:

  1. write and recall
  2. archive rehydrate and node activation lifecycle
  3. planning and context assembly
  4. task start and experience intelligence
  5. sessions, packs, find, and resolve
  6. rules, tools, patterns, and payload rehydration
  7. policy memory, evolution review, and governance
  8. review packs and delegation-learning support

If you specifically care about lifecycle decay, archive planning, and selective restoration, read Semantic Forgetting after this page.

Decision frame

Think of memory as the layer that holds execution evidence, assembles better startup context, tracks reuse signals, and exposes review-ready continuity state.

How to choose the right call

| If you want to... | Start with... |
| --- | --- |
| Persist new execution evidence | memory.write(...) |
| Search with natural language | memory.recallText(...) |
| Get planner-facing context | memory.planningContext(...) |
| Get the best first move | memory.taskStart(...) |
| Inspect the explicit next-action retrieval layer | memory.actionRetrieval(...) |
| Inspect heavier workflow and learning state | memory.experienceIntelligence(...) or memory.executionIntrospect(...) |
| Bring archived memory back into active use | memory.archive.rehydrate(...) |
| Record whether reused memory helped | memory.nodes.activate(...) |
| Build review-ready state | memory.reviewPacks.* |

Most-used SDK calls

| SDK method | Route | What it is for |
| --- | --- | --- |
| memory.write(...) | POST /v1/memory/write | Persist execution evidence into Lite |
| memory.archive.rehydrate(...) | POST /v1/memory/archive/rehydrate | Bring archived nodes back into warm or hot in Lite |
| memory.nodes.activate(...) | POST /v1/memory/nodes/activate | Record reuse outcome and activation feedback on Lite nodes |
| memory.anchors.rehydratePayload(...) | POST /v1/memory/anchor/payload/rehydrate | Restore only the anchor payload detail that is currently needed |
| memory.recallText(...) | POST /v1/memory/recall_text | Ask recall using natural language |
| memory.planningContext(...) | POST /v1/memory/planning/context | Get planner-facing recall and kickoff context |
| memory.contextAssemble(...) | POST /v1/memory/context/assemble | Build the final context runtime payload |
| memory.actionRetrieval(...) | POST /v1/memory/action/retrieval | Ask the runtime for an explicit tool, file, next-action, and evidence-backed retrieval decision |
| memory.experienceIntelligence(...) | POST /v1/memory/experience/intelligence | Inspect learned workflow and tool guidance |
| memory.taskStart(...) | POST /v1/memory/kickoff/recommendation | Get the best first action for a repeated task |
| memory.executionIntrospect(...) | POST /v1/memory/execution/introspect | Pull the heavier local introspection surface |
| memory.tools.feedback(...) | POST /v1/memory/tools/feedback | Record tool outcome and potentially materialize policy memory |
| memory.reviewPacks.evolution(...) | POST /v1/memory/evolution/review-pack | Review evolution state, policy review, and governance contracts |
| memory.policies.governanceApply(...) | POST /v1/memory/policies/governance/apply | Retire or reactivate a persisted policy memory |

Minimal write example

```ts
await aionis.memory.write({
  tenant_id: "default",
  scope: "repair-flow",
  actor: "docs-example",
  input_text:
    "Patched serializer handling in src/routes/export.ts and verified the export response shape.",
});
```

What makes a good write:

  1. the scope matches the work you want to improve later
  2. the text records real execution, not generic commentary
  3. the actor and tenant are consistent with later reads

Weak writes produce weak later task starts.

Minimal planning example

```ts
const planning = await aionis.memory.planningContext({
  tenant_id: "default",
  scope: "repair-flow",
  query_text: "repair export response serialization bug",
  context: {
    goal: "repair export response serialization bug",
    task_kind: "repair_export",
  },
  tool_candidates: ["bash", "edit", "test"],
  return_layered_context: true,
});
```

Read these fields first:

  1. kickoff_recommendation
  2. planner_packet
  3. workflow_signals
  4. pattern_signals

Task-start surfaces

If you want the shortest public entrypoint into memory-guided continuity, these are the important calls:

| SDK method | What comes back |
| --- | --- |
| memory.taskStart(...) | A compact first_action derived from kickoff recommendation |
| memory.kickoffRecommendation(...) | The raw kickoff response and rationale |
| memory.actionRetrieval(...) | The explicit tool, file, next-action, evidence, and uncertainty layer |
| memory.experienceIntelligence(...) | Workflow, tool, and learning-oriented guidance |

Use taskStart first when you want the best first move. Use planningContext first when you want more than one hint and need the runtime to assemble planner-facing context.

If you specifically care about why the runtime picked that move, read Action Retrieval and Uncertainty and Gates next.

If you are integrating memory for the first time, this is the best progression:

  1. memory.write(...)
  2. memory.recallText(...)
  3. memory.planningContext(...)
  4. memory.taskStart(...)
  5. memory.archive.rehydrate(...) and memory.nodes.activate(...) when reuse quality starts to matter

That order helps you understand the surface from evidence ingestion to better startup guidance.

Lifecycle surfaces

Lite now includes the local memory lifecycle routes through the public SDK:

```ts
await aionis.memory.archive.rehydrate({
  tenant_id: "default",
  scope: "repair-flow",
  client_ids: ["billing-timeout-repair"],
  target_tier: "warm",
  reason: "bring the archived repair memory back into the active set",
});

await aionis.memory.nodes.activate({
  tenant_id: "default",
  scope: "repair-flow",
  client_ids: ["billing-timeout-repair"],
  outcome: "positive",
  activate: true,
  reason: "the rehydrated node helped complete the repair",
});
```

These lifecycle calls matter because they let Lite distinguish between:

  • memory that exists
  • memory that should be active again
  • memory that proved useful when reused

They now also feed the runtime's forgetting and relocation summaries, which means the public surfaces can explain why colder memory stayed out of the active set and how it should be restored later.

Semantic forgetting and selective rehydration

The newer forgetting path sits one layer under the lifecycle endpoints:

  1. importance and feedback update lifecycle signals
  2. semantic forgetting decides whether memory should be retained, demoted, archived, or reviewed
  3. archive relocation decides whether payload should move toward colder storage
  4. rehydration surfaces decide whether summary, partial, full, or differential restoration is enough

Use this page for the broad memory model. Use Semantic Forgetting for the dedicated forgetting and rehydration contract.

That is one of the main reasons the runtime can plausibly claim self-evolving continuity rather than static storage.

Continuity carriers and provenance

Lite now treats several runtime records as explicit continuity carriers:

  • handoff
  • session_event
  • session

These are not only recallable records. They can now act as learning inputs that project into workflow memory.

What changed in practice:

  1. the carrier is normalized into execution-native memory
  2. the resulting workflow candidate keeps a distillation_origin
  3. stable workflow promotion preserves that origin instead of erasing it
  4. planning and introspection surfaces expose that provenance directly

The important public signals are:

  • planning_summary.continuity_carrier_summary
  • planning_summary.distillation_signal_summary
  • planner_packet.sections.candidate_workflows
  • executionIntrospect(...).demo_surface.sections.workflows

The most useful origin values to recognize are:

| Origin | What it means |
| --- | --- |
| handoff_continuity_carrier | This workflow was learned from structured handoff state |
| session_event_continuity_carrier | This workflow was learned from repeated session events |
| session_continuity_carrier | This workflow was learned from session-scoped continuity state |
| execution_write_projection | This workflow was projected from execution-native write evidence |
| replay_learning_episode | This workflow came from replay learning and playbook reuse |
This is one of the most important recent Memory v2 upgrades, because it turns continuity learning from:

  • implicit
  • buried in raw slots
  • hard to trust

into something visible and auditable through the default runtime summaries.

Policy memory and evolution

The newest self-evolving surface in public Lite is policy memory.

This is the loop:

  1. memory.tools.feedback(...) records whether a tool decision worked
  2. stable pattern and workflow evidence can materialize a persisted policy memory
  3. memory.experienceIntelligence(...) can read that memory back as a live policy_contract
  4. memory.executionIntrospect(...) and memory.reviewPacks.evolution(...) can inspect it
  5. memory.policies.governanceApply(...) can retire or reactivate it

If you want the fuller walkthrough, read Policy Memory and Evolution.

Sessions and review-oriented helpers

These surfaces are useful when your host needs continuity state beyond a single task-start answer:

| SDK method family | Purpose |
| --- | --- |
| memory.sessions.* | Create sessions and append local session events |
| memory.packs.* | Export or import local packs |
| memory.find(...) / memory.resolve(...) | Direct local lookup and node resolution |
| memory.reviewPacks.* | Pull continuity or evolution review material |
| memory.delegationRecords.* | Read or write delegation-learning records |

Use these helpers when continuity is bigger than one answer:

  • sessions when a task persists over time
  • review packs when a human or host needs review-ready state
  • delegation records when multi-agent learning needs to be kept explicitly

Tools, rules, and patterns

Lite also exposes a narrower local policy-learning loop:

| SDK method family | Purpose |
| --- | --- |
| memory.tools.select(...) | Tool selection decision path |
| memory.tools.feedback(...) | Store tool feedback and distill tool outcomes |
| memory.rules.state(...) | Update local rule state |
| memory.rules.evaluate(...) | Evaluate Lite rules |
| memory.patterns.suppress(...) | Operator stop-loss on a learned pattern |
| memory.anchors.rehydratePayload(...) | Expand an anchor-linked payload |

This family is easy to ignore, but it matters when the host needs more control over learned behavior rather than only consuming output recommendations.

Common integration mistakes

Most disappointing first integrations come from one of these:

  1. writing generic notes and expecting strong task-start guidance
  2. mixing unrelated work into the same scope
  3. querying one scope and writing into another
  4. expecting broader orchestration behavior from a local runtime path
  5. treating taskStart as magic instead of as a surface fed by execution evidence

If the runtime feels sparse, the first thing to inspect is usually the shape and quality of the written evidence.

Memory in Lite today

The Lite runtime includes:

  1. write and recall
  2. planning context and task start
  3. archive rehydrate and node activation
  4. review packs
  5. policy memory and governance apply

That means the main continuity loop is available through the public local runtime.

Raw contract sources

When you need exact field names, read:

  1. packages/full-sdk/src/contracts.ts
  2. LOCAL_RUNTIME_API_CAPABILITY_MATRIX.md

Related pages:

  1. SDK Quickstart
  2. Task Start
  3. Contracts and Routes
