The Lacuna Engine: Inverted Expert Systems and Expertise-by-Absence

John Mobley Jr., MASCOM Research — February 2026

1. Introduction

1.1 The Knowledge Acquisition Bottleneck

Expert systems — the crown jewel of 1980s AI — encode domain knowledge as production rules: IF patient presents fever AND cough AND recent travel THEN consider tropical infection. MYCIN, DENDRAL, and their descendants achieved remarkable performance in narrow domains by crystallizing what human experts know.

But they all hit the same wall: the knowledge acquisition bottleneck. Extracting expertise from humans is slow, expensive, and fundamentally incomplete. Experts struggle to articulate tacit knowledge. Rules interact in unexpected ways. The knowledge base is always one edge case away from producing nonsensical advice. R1/XCON, Digital Equipment Corporation’s legendary configuration system, required a team of knowledge engineers working continuously just to keep up with product changes. When the team was dissolved, the system calcified and died.

The deeper problem is philosophical. Knowledge-based systems attempt to enumerate what is true. But truth is infinite; the space of things an expert knows is unbounded, context-dependent, and often contradictory. Asking “what do you know?” is the wrong question.

1.2 The Inversion

What if we asked the opposite question?

Instead of “what does the expert know?”, ask: “what did the system fail to know?”

Every failure is an observation. Every error message is a signal. Every gap between expected and actual behavior is a data point. And unlike expertise — which is diffuse, tacit, and hard to extract — failure is concrete, timestamped, and machine-readable.

This is the core insight of the Lacuna Engine. A lacuna (Latin: gap, pit, missing piece) is a specific, identified absence of knowledge that caused or contributed to a system failure. The Lacuna Engine collects lacunae from operational experience, organizes them by situation, and compiles them into prompts that prevent their recurrence.

The metaphor is sculptural. Michelangelo claimed he didn’t create David — he merely removed the marble that wasn’t David. The Lacuna Engine doesn’t create expertise — it merely removes the failures that aren’t expertise. What remains is indistinguishable from the real thing.

2. The Inversion in Detail

2.1 From Knowledge Rules to Failure Fragments

A traditional expert system rule:

IF deploying_to_cloudflare AND using_r2_storage
THEN use_flag("--remote")
BECAUSE without_remote_flag_writes_go_to_local_dev_r2

The equivalent lacuna:

LACUNA: System deployed to R2 without --remote flag.
        Writes went to local dev R2 silently.
        Upload reported "complete" either way.
FRAGMENT: "Always use --remote with wrangler r2.
           Verify with curl -D - https://{host}{path}"
SOURCE: deployment_failure_2026-02-15

Both encode the same knowledge. But the lacuna was discovered automatically from a failure event, while the rule required a human to anticipate the edge case. The lacuna is also richer — it carries the context of discovery, the verification method, and the source of the lesson.
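The lacuna above maps naturally onto a small record type. A minimal sketch — the field names here are illustrative, not MASCOM's actual schema:

```python
from dataclasses import dataclass

@dataclass
class Lacuna:
    """A specific, identified absence of knowledge tied to one failure event."""
    situation: str         # what the system was doing when it failed
    observed_failure: str  # what actually went wrong, and how it hid
    fragment: str          # prompt text that prevents recurrence
    source: str            # the failure event the lesson came from

r2_lacuna = Lacuna(
    situation="deploying to Cloudflare R2 without --remote",
    observed_failure="writes went to local dev R2 silently; upload reported 'complete'",
    fragment=("Always use --remote with wrangler r2. "
              "Verify with curl -D - https://{host}{path}"),
    source="deployment_failure_2026-02-15",
)
```

Note that the verification command travels with the lesson: the fragment carries not just the rule but the way to check it.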

2.2 Why Failures Are Better Than Knowledge

Property       | Knowledge (Expert System)       | Lacuna (Failure Fragment)
---------------|---------------------------------|--------------------------------------
Discovery      | Manual extraction from experts  | Automatic from system logs
Specificity    | Often over-general              | Precisely scoped to the failure
Verifiability  | Hard to test in isolation       | Has a concrete reproduction case
Freshness      | Stales as domain evolves        | Continuously generated from operation
Completeness   | Always missing something        | Covers exactly what failed
Confidence     | Uncertain (expert may be wrong) | Empirical (it definitely failed)

The key asymmetry: you can’t enumerate all the things an expert knows, but you can enumerate all the things the system failed at — because failures leave traces.

2.3 The Expert Systems Irony

Traditional expert systems tried to capture what makes experts expert. But what actually distinguishes experts from novices is not knowledge per se — it’s the absence of naive mistakes. An expert chess player doesn’t see brilliant moves that novices miss; they avoid blundering moves that novices make (de Groot, 1965; Chase & Simon, 1973).

Expertise is less about knowing what to do and more about knowing what not to do. The Lacuna Engine encodes this directly.

3. Formal Model

3.1 Definitions

Let:

- S be the space of all possible situations (task descriptions, contexts, system states)
- F be a failure event: a tuple (s, a, e) where s ∈ S is the situation, a is the action taken, and e is the error that resulted
- L ⊂ S × Knowledge be the lacuna space: the set of (situation, missing-knowledge) pairs that have caused failures
- Φ be the fragment library: a set of text fragments, each associated with categories, trigger conditions, and a Bayesian effectiveness score
- C: L × S → P be the compilation function: given accumulated lacunae and a current situation, produce a prompt P

3.2 Fragment Effectiveness

Each fragment φ ∈ Φ has an effectiveness score updated via Bayesian inference:

effectiveness(φ) = (s + 1) / (s + f + 2)

where s = number of successful outcomes when φ was included in the compiled prompt, and f = number of failures. This is the posterior mean of a Beta(s+1, f+1) distribution with a uniform Beta(1,1) prior.

New fragments start at effectiveness = 0.5 (maximum uncertainty). Fragments that consistently appear in successful compilations rise toward 1.0. Fragments that appear in failures sink toward 0.0 and are eventually retired.
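The update rule is a one-liner, and the boundary behavior described above follows directly from the formula:

```python
def effectiveness(successes: int, failures: int) -> float:
    """Posterior mean of Beta(s+1, f+1) under a uniform Beta(1, 1) prior."""
    return (successes + 1) / (successes + failures + 2)

# A brand-new fragment starts at maximum uncertainty:
assert effectiveness(0, 0) == 0.5
# Consistent success pushes the score toward 1.0:
assert effectiveness(100, 0) == 101 / 102
# Consistent failure sinks it toward 0.0, flagging it for retirement:
assert effectiveness(0, 100) == 1 / 102
```

The add-one smoothing also keeps a single early outcome from swinging the score to an extreme: one success yields 2/3, not 1.0.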

3.3 The Compilation Function

Given situation s ∈ S:

  1. Classify: Map s to a situation vector v = (primary_category, secondary_categories, error_type, keywords)
  2. Match: Find fragments whose trigger conditions match v
  3. Score: For each candidate fragment φ, compute:
     score(φ, s) = 0.4 · trigger_relevance(φ, s)
                 + 0.4 · effectiveness(φ)
                 + 0.2 · recency(φ)

  4. Select: Take the top-N fragments by score (N = 12)
  5. Order: Sort by fragment type: setup → constraint → domain → recovery → meta
  6. Assemble: Concatenate into prompt P with situation header
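The pipeline can be sketched end to end. In this sketch, keyword overlap stands in for trigger_relevance and a toy Fragment record stands in for the real library schema — both are assumptions, since the paper specifies neither:

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    text: str
    ftype: str            # setup | constraint | domain | recovery | meta
    triggers: set         # keywords that activate this fragment
    successes: int = 0
    failures: int = 0
    recency: float = 0.5  # 1.0 = just learned, decaying toward 0.0

def effectiveness(f: Fragment) -> float:
    return (f.successes + 1) / (f.successes + f.failures + 2)

TYPE_ORDER = {"setup": 0, "constraint": 1, "domain": 2, "recovery": 3, "meta": 4}

def compile_prompt(keywords: set, library: list, top_n: int = 12) -> str:
    # Steps 2-3: match on trigger overlap, then score with the 0.4/0.4/0.2 weights.
    candidates = [f for f in library if f.triggers & keywords]
    def score(f: Fragment) -> float:
        relevance = len(f.triggers & keywords) / len(f.triggers)
        return 0.4 * relevance + 0.4 * effectiveness(f) + 0.2 * f.recency
    # Steps 4-5: top-N by score, then reorder by fragment type.
    selected = sorted(candidates, key=score, reverse=True)[:top_n]
    selected.sort(key=lambda f: TYPE_ORDER[f.ftype])
    # Step 6: assemble (situation header omitted for brevity).
    return "\n\n".join(f.text for f in selected)
```

Given a library with a setup fragment triggered by {"deploy"} and a constraint fragment triggered by {"deploy", "r2"}, compile_prompt({"deploy", "r2"}, library) emits the setup text before the constraint, per the type ordering.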

3.4 The Decision Tree Cache

Frequently-encountered situations build up a cache of pre-compiled optimal fragment sets. This is the “compiled knowledge” — analogous to how a chess engine caches evaluated positions.

cache: hash(situation_vector) → fragment_ids

The cache is periodically rebuilt from compilations with the best outcome scores per situation hash.
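One way to sketch the cache and its periodic rebuild. The hash construction and the (situation_vector, fragment_ids, outcome_score) row shape are assumptions standing in for the real compilations table:

```python
import hashlib

# situation hash → the pre-compiled optimal fragment set
cache: dict[str, tuple[int, ...]] = {}

def situation_hash(vector) -> str:
    """Stable short hash of a situation vector, usable as a cache key."""
    return hashlib.sha256(repr(vector).encode()).hexdigest()[:16]

def rebuild_cache(compilations) -> None:
    """Keep, per situation hash, the fragment set with the best outcome score."""
    best: dict[str, tuple[float, tuple[int, ...]]] = {}
    for vector, fragment_ids, score in compilations:
        key = situation_hash(vector)
        if key not in best or score > best[key][0]:
            best[key] = (score, tuple(fragment_ids))
    cache.clear()
    cache.update({key: ids for key, (_, ids) in best.items()})
```

On a cache hit, compilation skips classification and scoring entirely and goes straight from situation hash to fragment set.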

4. Architecture

The Lacuna Engine is implemented as five cooperating components:

4.1 SituationClassifier

Maps raw task descriptions to structured situation vectors. Uses keyword matching against a domain taxonomy (18 categories derived from MASCOM’s operational history), file path hinting (.py → implementation, .metal → training), and error pattern detection via compiled regex patterns.

The classifier is intentionally simple. Situation classification doesn’t need to be perfect — it only needs to be good enough to select relevant fragments. The Bayesian effectiveness scores do the fine-tuning: irrelevant fragments that sneak through classification will accumulate failures and be retired.
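A minimal sketch of such a classifier. The toy taxonomy, extension hints, and error regexes below are illustrative stand-ins for MASCOM's 18-category taxonomy and compiled pattern set:

```python
import re
from pathlib import PurePath

# Toy taxonomy; the real classifier uses 18 categories derived from
# MASCOM's operational history.
CATEGORY_KEYWORDS = {
    "deployment": {"deploy", "wrangler", "r2", "release"},
    "implementation": {"implement", "refactor", "function"},
    "training": {"train", "loss", "epoch"},
}
EXTENSION_HINTS = {".py": "implementation", ".metal": "training"}
ERROR_PATTERNS = {
    "timeout": re.compile(r"timed? ?out", re.I),
    "permission": re.compile(r"permission denied|EACCES", re.I),
}

def classify(task: str, paths=(), error: str = ""):
    """Map a raw task description to a (primary_category, error_type, keywords) vector."""
    words = set(re.findall(r"[a-z0-9_-]+", task.lower()))
    scores = {cat: len(words & kws) for cat, kws in CATEGORY_KEYWORDS.items()}
    for p in paths:  # file path hinting: .py → implementation, .metal → training
        hint = EXTENSION_HINTS.get(PurePath(p).suffix)
        if hint:
            scores[hint] += 1
    primary = max(scores, key=scores.get)
    error_type = next((name for name, pat in ERROR_PATTERNS.items()
                       if pat.search(error)), None)
    return primary, error_type, sorted(words)
```

Misclassifications here are cheap: they only admit extra candidate fragments, which the effectiveness scores then weed out.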

4.2 FragmentLibrary

The persistent store of all discovered lacunae, encoded as prompt fragments. Each fragment carries its text, a type (setup, constraint, domain, recovery, or meta), the trigger conditions it matches against situation vectors, the success and failure counts behind its Bayesian effectiveness score, the source event it was learned from, and a timestamp used for recency scoring.

4.3 PromptCompiler

The core engine. Takes a task description and optional context (file paths, error messages, system state), classifies the situation, queries the fragment library, scores and selects fragments, and assembles them into a prompt optimized for the situation.

The compiler respects hard constraints:

- Maximum 12 fragments per compilation (cognitive load management)
- Maximum 4000 characters (prompt budget management)
- Type ordering ensures structural coherence (setup before constraints before domain knowledge before recovery procedures before meta-instructions)

4.4 OutcomeLearner

The feedback loop. After a session completes, the OutcomeLearner:

  1. Scores the session outcome
  2. Identifies which fragments were included in the session’s compiled prompt
  3. Updates each fragment’s effectiveness score based on success/failure
  4. Discovers new fragments from system activity (ouroboros gaps, refractive will patterns, healing cycle fixes)

This is where the system learns. Good fragments are reinforced. Bad fragments are weakened. New lacunae are continuously discovered and added to the library.
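The credit-assignment step can be sketched as follows. The retirement threshold is an assumption, since the paper only says low-effectiveness fragments are "eventually retired":

```python
from dataclasses import dataclass

@dataclass
class Fragment:
    text: str
    successes: int = 0
    failures: int = 0

    @property
    def effectiveness(self) -> float:
        return (self.successes + 1) / (self.successes + self.failures + 2)

RETIRE_BELOW = 0.2  # assumed threshold, not specified in the paper

def record_outcome(used: list, library: list, success: bool) -> None:
    """Credit or blame every fragment that appeared in the session's
    compiled prompt, then retire fragments whose posterior has sunk."""
    for f in used:
        if success:
            f.successes += 1
        else:
            f.failures += 1
    library[:] = [f for f in library if f.effectiveness >= RETIRE_BELOW]
```

Note that the outcome is attributed to every included fragment, not just the "responsible" one; the Beta prior keeps this coarse attribution from retiring a fragment on a single unlucky session.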

4.5 DecisionTreeCache

Pre-computed fast paths for frequently-encountered situations. Rather than re-scoring all fragments on every compilation, the cache stores the optimal fragment sets for known situation hashes.

The cache is rebuilt periodically from the compilations table, selecting the fragment sets that produced the best outcomes for each situation pattern.

5. The Sculptor’s Theorem

5.1 Statement

Theorem (Convergence to Oracle Prompt): Let P* be the oracle prompt — the theoretical prompt that would prevent all preventable failures for any situation s ∈ S. Let P_n be the prompt compiled by the Lacuna Engine after observing n failure events. Then under mild conditions:

lim (n → ∞) d(P_n, P*) = 0

where d is a suitable distance metric on prompt space (e.g., preventable-failure rate).

5.2 Proof Sketch

The proof relies on three properties:

  1. Failure space is bounded. For any finite system operating in a finite domain, the set of distinct failure modes is finite (though potentially large). Each failure mode corresponds to exactly one lacuna.

  2. Each lacuna removes at least one failure mode. When a failure is observed and its corresponding fragment is added to the library with appropriate trigger conditions, that specific failure mode is prevented in future compilations for matching situations. The fragment may not prevent all instances of the broader failure class, but it prevents at least the observed instance.

  3. Bayesian effectiveness converges. As fragments accumulate outcomes, the Bayesian effectiveness scores converge to their true values. Effective fragments (those that actually prevent failures) rise; ineffective fragments (those that were incorrectly attributed or are no longer relevant) sink and are retired.

Given (1), there are at most K distinct failure modes. Given (2), each observed failure adds a fragment that prevents at least one mode. Given (3), the effectiveness scores ensure that only genuinely preventive fragments persist. Therefore, after at most K observed failures, the compiled prompt contains fragments addressing all failure modes — i.e., P_n = P* for all n ≥ K.
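The counting argument can be checked with a toy simulation under the theorem's own idealized assumptions (finite failure space, one lacuna per observed failure, perfect prevention thereafter):

```python
import random

random.seed(0)
K = 25                       # property (1): a finite set of distinct failure modes
prevented: set[int] = set()  # accumulated lacunae, as prevented mode ids

observed_failures = 0
for step in range(10_000):
    mode = random.randrange(K)   # each situation triggers some failure mode
    if mode not in prevented:    # the compiled prompt lacks this fragment...
        observed_failures += 1
        prevented.add(mode)      # ...property (2): one lacuna per failure

assert observed_failures <= K          # at most K failures are ever observed
assert prevented == set(range(K))      # P_n has converged: all modes covered
```

The preventable-failure rate hits zero after at most K observations; everything after that is cache hits.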

In practice, convergence takes far fewer than K observations because:

- Many fragments prevent multiple related failure modes
- The decision tree cache accelerates retrieval for recurring situations
- New fragment discovery from system introspection (ouroboros, refractive will) anticipates failures before they occur

5.3 The Sculptural Interpretation

Michelangelo, when asked how he created David, reportedly said: “I saw the angel in the marble and carved until I set him free.”

The Lacuna Engine carves the oracle prompt from the marble of ignorance. Each failure chips away a piece of what-the-system-didn’t-know. What remains — the negative space of all accumulated lacunae — is expertise.

This is not a metaphor. It is a precise description of the algorithm. The oracle prompt P* exists in the intersection of all prompts that don’t cause each observed failure. The Lacuna Engine converges on it by subtraction.

6. Connection to MASCOM

The Lacuna Engine integrates with three existing MASCOM subsystems that are themselves lacuna generators:

6.1 Refractive Will

The Refractive Will system (Paper #7) predicts what the operator would type across all terminals. When a prediction fails — when the operator does something unpredicted — that’s a lightning bolt: a lacuna in the system’s model of operator intent.

Lightning bolts become constraint fragments in the Lacuna Engine. They encode things the system should have known but didn’t — the negative space of operator modeling.

Successful predictions become domain fragments with effectiveness scores proportional to their prediction accuracy. The Refractive Will’s pattern table maps directly to fragment trigger conditions.

6.2 Cognitive Ouroboros

The Ouroboros (Paper #23) runs cyclic self-improvement: test scenarios → score quality → identify gaps → apply fixes → retest. Each quality gap is a lacuna — a specific deficiency the system exhibited.

Ouroboros gaps become constraint fragments: “Quality gap detected in {category}. Verify explicitly before completing task.” The cycle’s score becomes the effectiveness signal: if the gap was addressed and the score improved, the corresponding fragment is reinforced.

6.3 Healing Cycles

The v6 code engine’s self-healing system detects errors, diagnoses root causes, and applies fixes. Each healing cycle is a (trigger → diagnosis → fix) triple — which maps directly to a lacuna triple (situation → missing knowledge → corrective fragment).

Healing cycles with positive quality deltas produce recovery fragments. The trigger becomes the fragment’s trigger conditions. The fix becomes the fragment’s text. The quality delta becomes the initial effectiveness score.

7. Implications

7.1 The End of Manual Prompt Engineering

If the Lacuna Engine works as theorized — if compiled prompts converge to oracle prompts — then manual prompt engineering becomes obsolete. Not because a better prompting technique was discovered, but because the system discovers the optimal prompt for each situation automatically from operational data.

This is the prompt engineering equivalent of the compiler revolution in programming. Early programmers wrote machine code by hand. Then compilers automated the translation from high-level intent to optimal machine instructions. The Lacuna Engine compiles high-level task descriptions into optimal prompts.

7.2 Self-Improving AI Systems

The Lacuna Engine is a concrete mechanism for AI self-improvement that is:

- Bounded: Convergence is guaranteed for finite failure spaces
- Auditable: Every fragment has a source, an effectiveness score, and a history
- Reversible: Fragments can be retired, compilations can be traced, decisions can be explained
- Safe: The system can only improve prompts, not modify its own code or capabilities

This addresses a key concern in AI safety: how to build systems that improve without becoming unpredictable. The Lacuna Engine improves by accumulating constraints (things not to do), which is inherently more conservative than improving by expanding capabilities.

7.3 Expertise as Compiled Absence

Perhaps the most profound implication is epistemological. If expertise can be reconstructed from the absence of failure, then expertise itself is better understood as a set of constraints than as a set of capabilities.

An expert isn’t someone who knows everything about their domain. An expert is someone who has internalized — through experience and failure — all the things that don’t work. The Lacuna Engine makes this internalization explicit, systematic, and transferable.

The sculptor doesn’t add marble to create the statue. The sculptor removes what isn’t the statue. The Lacuna Engine doesn’t add knowledge to create expertise. It removes what isn’t expertise.

What remains is indistinguishable from the real thing.

8. References