Most AI agent frameworks treat cognition as a black box: prompt in, response out. ACT-R (Adaptive Control of Thought - Rational) is different. It's a cognitive architecture originally built to model human reasoning, and I've been adapting it to give autonomous agents something they desperately lack: structured, persistent, learnable cognition.
Core Components
Declarative Memory
Declarative memory stores knowledge as vectorized chunks, but unlike a simple vector database, each chunk carries activation dynamics. Activation is a numerical value reflecting how relevant a memory is right now, computed from two factors: how recently it was accessed and how frequently it has been used. Retrieval isn't a naive similarity search; it's a competition. When the system needs a memory, all matching chunks compete on their activation levels, and only the winner surfaces. The system therefore naturally prioritizes well-established, recently relevant knowledge without manual curation.
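Retrieval-as-competition can be sketched in a few lines of Python. This is an illustrative simplification, not the Aegis Falls implementation: the chunk names, access timestamps, decay rate, and retrieval threshold are all made up for the example.

```python
import math

def base_level(access_times, now, decay=0.5):
    """Base-level activation: log of the sum of power-law-decayed accesses."""
    return math.log(sum((now - t) ** -decay for t in access_times))

def retrieve(chunks, now, threshold=0.0):
    """All matching chunks compete; the highest activation above threshold wins."""
    scored = [(base_level(times, now), name) for name, times in chunks.items()]
    best = max(scored)
    return best[1] if best[0] >= threshold else None

# A chunk accessed several times recently beats an old one-off memory.
chunks = {
    "subnet-plan": [90.0, 95.0, 99.0],  # accessed three times, recently
    "old-note":    [5.0],               # accessed once, long ago
}
print(retrieve(chunks, now=100.0))      # → subnet-plan
```

Note that "old-note" doesn't just lose the competition here; its activation has decayed below the threshold, so on its own it wouldn't be retrieved at all.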
Spreading activation adds another layer. When a chunk is active, it sends activation to related chunks through associative links. Recall one concept and related concepts become easier to retrieve. This is how context shapes memory retrieval, the same way thinking about "workshop" makes "tools" and "projects" more accessible in your own mind.
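A minimal sketch of spreading activation, assuming (as ACT-R commonly does) that a fixed total source activation is divided among the items currently in the buffers; the link strengths below are illustrative values, not learned ones:

```python
def spread(buffer_sources, associations, W_total=1.0):
    """Sum W_k * S_ki boosts from buffer contents to associated chunks.
    W_total is split evenly across buffer items (a simplifying assumption;
    ACT-R allows weighting individual sources)."""
    boost = {}
    if not buffer_sources:
        return boost
    w_k = W_total / len(buffer_sources)
    for source in buffer_sources:
        for chunk, s_ki in associations.get(source, {}).items():
            boost[chunk] = boost.get(chunk, 0.0) + w_k * s_ki
    return boost

# Thinking about "workshop" primes "tools" and "projects".
links = {"workshop": {"tools": 1.5, "projects": 1.1}}
print(spread(["workshop"], links))  # → {'tools': 1.5, 'projects': 1.1}
```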
Procedural Memory
Procedural memory holds production rules: IF-THEN patterns that govern behavior. When conditions in the current context match a rule's IF clause, that rule becomes a candidate to fire. But multiple rules can match simultaneously, so the system uses utility-based conflict resolution. Each production has a learned utility score. Rules that led to successful outcomes in the past accumulate higher utility. Rules that led to failure decay. The agent doesn't just follow instructions; it learns which behavioral patterns work.
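Conflict resolution can be sketched as utility-plus-noise selection over matched rules. The rule names and utilities below are invented for illustration, and Gaussian noise stands in for ACT-R's logistic utility noise:

```python
import random

def select_production(rules, context, noise_sd=0.1):
    """Match each rule's IF clause against the context, then fire the
    candidate with the highest utility plus noise."""
    matched = [r for r in rules if r["if"](context)]
    if not matched:
        return None
    return max(matched, key=lambda r: r["utility"] + random.gauss(0, noise_sd))

rules = [
    {"name": "retry-request",  "if": lambda c: c["error"] == "timeout", "utility": 0.8},
    {"name": "escalate",       "if": lambda c: c["error"] == "timeout", "utility": 0.3},
    {"name": "log-and-ignore", "if": lambda c: c["error"] == "minor",   "utility": 0.5},
]
winner = select_production(rules, {"error": "timeout"})
# Usually "retry-request"; noise occasionally lets "escalate" fire instead.
```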
Working Memory
Working memory is a set of priority-buffered slots that hold the agent's active reasoning context. Not everything in the knowledge base matters at any given moment. Working memory maintains what's currently relevant: the active goal, the most recently retrieved memory, intermediate reasoning results, and perceptual input. Each buffer has capacity limits, forcing the system to stay focused rather than drowning in context.
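A priority-buffered slot can be modeled as a small store that evicts its lowest-priority entry when full. The capacity, slot names, and lowest-priority-out eviction policy here are illustrative assumptions:

```python
class Buffer:
    """A capacity-limited slot store; the lowest-priority entry is
    evicted when the buffer overflows (an assumed eviction policy)."""
    def __init__(self, capacity):
        self.capacity = capacity
        self.slots = {}  # name -> (priority, value)

    def put(self, name, value, priority):
        self.slots[name] = (priority, value)
        if len(self.slots) > self.capacity:
            evict = min(self.slots, key=lambda n: self.slots[n][0])
            del self.slots[evict]

imaginal = Buffer(capacity=2)
imaginal.put("step-1", "parsed input", priority=1)
imaginal.put("step-2", "candidate plan", priority=3)
imaginal.put("step-3", "scratch note", priority=2)
# "step-1" (lowest priority) was evicted to stay within capacity.
```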
The Activation Equation
The heart of ACT-R's retrieval mechanism is the activation equation:
```
Activation(i) = Base-Level(i) + Spreading(i) + Noise

Base-Level(i) = ln( SUM[ t_j ^ -d ] )
  where t_j = time since j-th access, d = decay rate (~0.5)

Spreading(i) = SUM[ W_k * S_ki ]
  where W_k  = source activation from buffer k
        S_ki = strength of association from source k to chunk i

Noise = logistic distribution sample (stochastic variability)
```

Base-level activation captures recency and frequency. A memory accessed many times recently has high base-level activation. One accessed once six months ago has almost none. The logarithmic decay means memories fade but never fully disappear.
Spreading activation captures context. Whatever is currently in the buffers sends activation to associated chunks. If the agent is reasoning about network infrastructure, memories related to networking get a boost.
Noise prevents the system from being deterministic. Sometimes a lower-activation memory wins retrieval. This produces the kind of creative, non-obvious associations that pure ranking systems miss.
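Putting the three terms together, the full equation can be sketched directly. The noise scale `s` and the sample inputs are assumed values for illustration; the logistic sample is drawn via the inverse-CDF trick:

```python
import math
import random

def activation(access_times, now, boost=0.0, s=0.25, decay=0.5):
    """Activation(i) = Base-Level(i) + Spreading(i) + Noise.
    `boost` is the precomputed spreading term; `s` is an assumed
    logistic noise scale."""
    base = math.log(sum((now - t) ** -decay for t in access_times))
    u = random.random()
    noise = s * math.log(u / (1.0 - u))  # logistic sample via inverse CDF
    return base + boost + noise

# A strong, contextually boosted memory almost always out-activates a
# weak one, but the noise term means "almost", not "always".
strong = activation([90.0, 99.0], now=100.0, boost=0.5)
weak = activation([40.0], now=100.0)
```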
Why This Matters for AI Agents
The current landscape of AI agents falls into two camps, and both have critical flaws:
- Hardcoded agents — Every behavior is manually specified. They're reliable but brittle. Change the environment and they shatter. They can't learn.
- Pure LLM agents — Powerful reasoning but fundamentally stateless. Each session starts from zero. They don't accumulate knowledge. They don't develop preferences. They don't improve.
ACT-R bridges these: persistent learned knowledge in declarative memory, learned behavioral patterns in procedural memory, and maintained context in working memory. The agent accumulates experience. Productions that work get reinforced. Memories that matter stay accessible. Context carries forward.
Implementation in Aegis Falls
The Aegis Falls system implements ACT-R's core mechanisms on top of practical infrastructure. Declarative memory chunks live in PostgreSQL, indexed for both semantic similarity (via embeddings) and keyword search. Retrieval runs a hybrid query that combines vector cosine similarity with traditional text matching, then applies the activation equation to rank results.
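In spirit, the hybrid ranking step looks like the sketch below. This is not the Aegis Falls query itself: the blend weight `alpha`, the keyword-overlap scoring, and the sample chunks are all illustrative assumptions (the real system runs this inside PostgreSQL over embedding indexes).

```python
import math

def hybrid_rank(query_vec, query_terms, chunks, alpha=0.6):
    """Blend cosine similarity with keyword overlap, then add each
    chunk's activation to produce the final ranking."""
    def cosine(a, b):
        dot = sum(x * y for x, y in zip(a, b))
        na = math.sqrt(sum(x * x for x in a))
        nb = math.sqrt(sum(x * x for x in b))
        return dot / (na * nb) if na and nb else 0.0

    scored = []
    for c in chunks:
        sim = cosine(query_vec, c["embedding"])
        kw = len(query_terms & set(c["keywords"])) / max(len(query_terms), 1)
        scored.append((alpha * sim + (1 - alpha) * kw + c["activation"], c["id"]))
    return [cid for _, cid in sorted(scored, reverse=True)]

chunks = [
    {"id": "net-1", "embedding": [1.0, 0.0], "keywords": ["vlan", "subnet"], "activation": 1.2},
    {"id": "misc",  "embedding": [0.0, 1.0], "keywords": ["recipe"],         "activation": 0.1},
]
print(hybrid_rank([1.0, 0.1], {"subnet"}, chunks))  # → ['net-1', 'misc']
```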
The production system maintains a rule set with utility scores. During conflict resolution, matching rules are scored by utility plus noise, and the highest-scoring rule fires. Utilities update after outcomes via a reinforcement learning step.
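The utility update follows ACT-R's standard learning rule, U_new = U_old + α(R − U_old), which moves a rule's utility toward the reward it just received. The starting utility, reward, and learning rate below are example values:

```python
def update_utility(utility, reward, alpha=0.2):
    """ACT-R utility learning: U_new = U_old + alpha * (reward - U_old).
    Successful outcomes (high reward) pull utility up; failures pull it down."""
    return utility + alpha * (reward - utility)

u = 0.5
for _ in range(3):             # three successful firings (reward 1.0)
    u = update_utility(u, 1.0)
print(round(u, 3))             # → 0.744
```

Because each step closes only a fraction of the gap to the reward, utilities converge smoothly rather than jumping on a single outcome.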
Working memory buffers are maintained in application state with periodic persistence. Goal buffers track the current objective hierarchy. The retrieval buffer holds the most recently fetched chunk. The imaginal buffer carries intermediate reasoning artifacts.
```
$ python3 actr_engine.py --show-buffers

[ACT-R Buffer State]
goal:      active | depth: 2 | focus: primary-objective
retrieval: loaded | activation: 4.72 | decay: 0.031/hr
imaginal:  active | slots: 3/8 occupied
visual:    idle   | last update: 12m ago
motor:     idle

[Production Match]
candidates: 4 rules matched current state
selected:   rule-0147 (utility: 0.83, noise: +0.04)
firing...   action dispatched to motor buffer

[Memory Stats]
total chunks:    ~120
active (>0.5):   ~40
retrievals:      active
mean activation: 1.24
```
Beyond Traditional Architectures
What makes ACT-R compelling isn't any single feature. It's that the components work together as a system. Spreading activation means retrieval is context-sensitive. Utility learning means behavior adapts. Activation decay means the system naturally forgets irrelevant information without manual cleanup. Buffer limits mean the system stays focused.
This isn't about making the model smarter. It's about building the right cognitive infrastructure around the model so that intelligence can accumulate over time. The model provides reasoning capability. ACT-R provides the architecture for that reasoning to persist, adapt, and improve.
Related
- The Aegis Falls Memory Layer — how ACT-R works in practice within the Aegis Falls system
- Aegis Falls Architecture — the complete system that ACT-R memory powers
- OpenClaw — the agent platform built on this architecture