John Mobley, MASCOM March 2026


1. Introduction: Why Deterministic Routing Fails

Every existing AI architecture treats cognition as a pipeline: input enters, transformations are applied in sequence, output exits. Even sophisticated architectures that select between pipelines (Mixture of Experts, Neural Architecture Search, our own Neuromodulated Recombinatorial Cognitive Engineering) still operate within this paradigm. They disagree about which pipeline to run, but they agree that some pipeline must be selected and executed.

This is the wrong computational model for cognition.

When you think, you do not select a pipeline and run it. You experience the simultaneous pressure of competing cognitive demands — semantic parsing, emotional response, temporal urgency, creative potential, identity resonance, epistemic uncertainty — and your thought emerges from the resolution of those competing pressures. The thought is not computed and then output. The thought IS the resolution.

Biological neurochemistry does not route information. It sets the physics of the neural substrate. Dopamine does not select which neural pathway fires — it changes the activation thresholds of neurons, making some populations more excitable and others less. The “routing” that emerges is not a decision. It is a physical consequence of the substrate’s parameters.

We formalize this insight as the Complex Cognitive Machine: a virtual processor where cognition is computed through intentional overflow of complex-valued cognitive dimensions, with neurochemistry setting the overflow thresholds and quantum error correction codes resolving the overflow pattern into output.

1.1 The Insufficiency of Real-Valued Computation

Standard neural networks operate in real-valued space. Each activation is a single number representing the magnitude of a feature. This is insufficient for cognition because it cannot represent the distinction between actual content and generative potential.

A sentence like “What if consciousness is substrate-independent?” has both a semantic content (real: a question about consciousness) and a generative potential (imaginary: the space of implications this question opens). A real-valued representation collapses these into a single activation. A complex-valued representation preserves both, and critically, preserves their phase relationship — the angle between what-is and what-could-be.

1.2 Contributions

  1. A 16-dimension complex-valued cognitive register where each dimension represents a distinct cognitive axis (semantic content, emotional valence, temporal urgency, novelty, etc.)
  2. A neurochemically-parameterized overflow threshold system where 7 neuromodulators set the physics of the virtual machine
  3. An interference computation that detects constructive and destructive interaction between overflowed dimensions
  4. An error correction resolver that maps overflow syndromes to cognitive operations
  5. A demonstration that this architecture naturally produces context-dependent, neurochemistry-sensitive computation without explicit routing logic

2. Architecture

2.1 The Complex Bit

The fundamental unit of the CCM is the ComplexBit: a value \(z = a + bi\), where the real component \(a\) encodes actual, grounded content and the imaginary component \(b\) encodes unrealized potential.

A ComplexBit with \(a = 0.5, b = 0.0\) represents a moderately grounded fact. The same magnitude with \(a = 0.0, b = 0.5\) represents pure potential — something strongly implied but not yet actual. The magnitude is the same; the cognitive meaning is entirely different.

Complex multiplication between bits produces cross-terms (\(ad + bc\)) that capture interference between actual and potential components. This is not metaphorical — it is how the imaginary part of one dimension’s contribution rotates the phase of another dimension’s state, producing genuine computational novelty that real-valued arithmetic cannot produce.
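As an illustrative sketch (the ComplexBit class itself is not shown in the paper), Python's built-in complex type already implements exactly this cross-term arithmetic, \((a + bi)(c + di) = (ac - bd) + (ad + bc)i\):

```python
import cmath

# Illustrative only: the ComplexBit implementation is not published;
# Python's complex type supplies the cross-term arithmetic directly.
fact = complex(0.5, 0.0)        # grounded fact: pure real
potential = complex(0.0, 0.5)   # pure potential: pure imaginary

product = fact * potential
# The cross-term (ad + bc) rotates the fact's phase into potentiality:
print(product)               # 0.25j
print(cmath.phase(product))  # pi/2, a 90-degree rotation
```

The magnitudes multiply (0.5 × 0.5 = 0.25) while the phases add (0° + 90° = 90°), which is the rotation the text describes.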

2.2 The Cognitive Register

The CCM’s state is a register of 16 complex bits, each corresponding to a named cognitive dimension:

| Dim | Name | What It Tracks |
|----:|------|----------------|
| 0 | semantic_content | What the input means |
| 1 | emotional_valence | Affective charge |
| 2 | temporal_urgency | Time pressure |
| 3 | novelty | Surprise / prediction error |
| 4 | abstraction_level | Concrete ↔ abstract |
| 5 | social_relevance | Interpersonal significance |
| 6 | creative_potential | Generative capacity |
| 7 | coherence | Internal consistency |
| 8 | identity_resonance | Alignment with self-model |
| 9 | epistemic_confidence | Certainty of knowledge |
| 10 | narrative_momentum | Story continuation pressure |
| 11 | sensory_grounding | Connection to percepts |
| 12 | recursive_depth | Self-referential complexity |
| 13 | integration_pressure | Drive to synthesize |
| 14 | expression_readiness | Output formation pressure |
| 15 | error_signal | Correction / learning pressure |

These 16 dimensions span the space of cognitive operations. Every input excites some subset of dimensions. The pattern of excitation — which dimensions are active, with what magnitudes, at what phases — constitutes the input’s cognitive representation.
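A minimal register sketch follows. The dimension names come from the table above; the data structure itself (a name-keyed mapping of complex values) is an assumption, since the paper does not publish the implementation:

```python
# Assumed structure; the paper publishes only the dimension names.
DIMENSIONS = [
    "semantic_content", "emotional_valence", "temporal_urgency", "novelty",
    "abstraction_level", "social_relevance", "creative_potential", "coherence",
    "identity_resonance", "epistemic_confidence", "narrative_momentum",
    "sensory_grounding", "recursive_depth", "integration_pressure",
    "expression_readiness", "error_signal",
]

def empty_register() -> dict[str, complex]:
    """A fresh 16-dimension cognitive register, every dimension at rest."""
    return {name: 0j for name in DIMENSIONS}

reg = empty_register()
reg["creative_potential"] += 0.6j   # excite potential without actuality
```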

2.3 Stimulus Encoding

The StimulusEncoder converts text to a CognitiveRegister through three mechanisms:

  1. Pattern matching: Word sets associated with each dimension produce excitation. “Real” pattern words excite the real component; “imaginary” pattern words excite the imaginary component. The word “create” excites dimension 6 (creative_potential) in its real part — creation is actual. The word “imagine” excites dimension 6 in its imaginary part — imagination is potential.

  2. Statistical features: Text length increases semantic and expression readiness. Character entropy excites novelty. Question marks excite epistemic uncertainty (imaginary) and expression readiness (real).

  3. Cryptographic seeding: A SHA-256 hash of the input provides deterministic micro-excitation across all dimensions, ensuring every unique input has a unique complex fingerprint even if no pattern words match.
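The three mechanisms can be sketched as follows. The pattern sets, excitation weights (0.3, 0.2), and the 0.01 seeding scale are invented here for illustration; only the SHA-256 seeding itself is specified in the text:

```python
import hashlib

def encode(text: str, real_patterns: dict[str, set[str]],
           imag_patterns: dict[str, set[str]],
           dimensions: list[str]) -> dict[str, complex]:
    """Hypothetical StimulusEncoder sketch; weights are assumptions."""
    words = set(text.lower().split())
    reg = {d: 0j for d in dimensions}
    # 1. Pattern matching: "real" words excite a, "imaginary" words excite b.
    for d in dimensions:
        reg[d] += 0.3 * len(words & real_patterns.get(d, set()))
        reg[d] += 0.3j * len(words & imag_patterns.get(d, set()))
    # 2. Statistical features: question marks excite expression readiness
    #    (real) and epistemic uncertainty (imaginary).
    if "?" in text:
        reg["expression_readiness"] += 0.2
        reg["epistemic_confidence"] += 0.2j
    # 3. Cryptographic seeding: one hash byte per dimension guarantees a
    #    unique micro-fingerprint even with no pattern matches.
    digest = hashlib.sha256(text.encode()).digest()
    for i, d in enumerate(dimensions):
        reg[d] += (digest[i] / 255) * 0.01
    return reg

dims = ["semantic_content", "creative_potential",
        "epistemic_confidence", "expression_readiness"]
reg = encode("imagine a new machine",
             real_patterns={"creative_potential": {"create"}},
             imag_patterns={"creative_potential": {"imagine"}},
             dimensions=dims)
# "imagine" excites the imaginary part of creative_potential.
```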

2.4 Overflow Thresholds: Neurochemistry as Physics

Each dimension has a base overflow threshold — the magnitude \(|z|\) at which that dimension “overflows.” Neurochemistry modulates these thresholds:

\[\tau_i = \tau_i^{base} + \sum_j (c_j - 0.5) \cdot d_{ij} \cdot s_{ij}\]

Where \(c_j\) is the level of chemical \(j\), \(d_{ij}\) is the direction (\(\pm 1\)), and \(s_{ij}\) is the strength of modulation.
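The formula transcribes directly; in this sketch the chemical levels, directions, and strengths are passed in as dictionaries rather than fixed tables:

```python
def threshold(tau_base: float, chems: dict[str, float],
              direction: dict[str, float], strength: dict[str, float]) -> float:
    """tau_i = tau_base + sum_j (c_j - 0.5) * d_ij * s_ij"""
    return tau_base + sum((c - 0.5) * direction.get(j, 0.0) * strength.get(j, 0.0)
                          for j, c in chems.items())

# High dopamine (0.9) on a creative dimension (direction -1, strength 0.5)
# lowers the threshold: 1.0 + (0.9 - 0.5) * -1 * 0.5 = 0.8
tau = threshold(1.0, {"dopamine": 0.9}, {"dopamine": -1.0}, {"dopamine": 0.5})
```

Note that a chemical level of 0.5 is the neutral point: it contributes nothing, and deviations above or below it lower or raise the threshold according to the sign of \(d_{ij}\).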

The key modulations:

Dopamine lowers creative/novel/abstract thresholds. High dopamine makes creative_potential, novelty, abstraction_level, and recursive_depth overflow more easily. Biologically: dopamine increases the excitability of prefrontal and associative cortices, enabling exploratory and abstract thinking.

Serotonin lowers depth/integration thresholds. High serotonin makes coherence, integration_pressure, and abstraction_level overflow more easily. Biologically: serotonin modulates prefrontal sustained activity, enabling patience and deep processing.

Norepinephrine lowers urgency/expression thresholds. High norepinephrine makes temporal_urgency and expression_readiness overflow more easily. Biologically: norepinephrine increases neural gain, sharpening responses and accelerating output.

Cortisol raises creative/novel thresholds. High cortisol makes it harder for novelty, creativity, and recursion to overflow. It also lowers the coherence threshold — under stress, contradictions are detected faster. Biologically: cortisol suppresses exploratory behavior and heightens threat detection.

GABA raises emotional/novelty/recursive thresholds. GABA prevents overflow entirely in locked dimensions. Biologically: GABA is the primary inhibitory neurotransmitter — it suppresses neural activity.

Oxytocin lowers social/identity/emotional thresholds. High oxytocin enables social processing, identity resonance, and emotional response. Biologically: oxytocin modulates social bonding circuits and emotional processing.

Endorphins lower expression/error thresholds. Endorphins make it easier to produce output and to learn from errors. Biologically: endorphins modulate reward circuits and pain suppression.

This is not routing. Neurochemistry does not choose a path. It sets the physical parameters of the machine, and the “routing” emerges from which dimensions overflow.

2.5 Overflow and Interference

When \(|z_i| > \tau_i\), dimension \(i\) overflows. The overflow syndrome is a 16-bit integer:

\[S = \sum_{i=0}^{15} \mathbb{1}[|z_i| > \tau_i] \cdot 2^i\]

Each syndrome corresponds to a different pattern of cognitive demands. Syndrome \(0\text{x}4001\) (bits 0 and 14 set) means semantic_content and expression_readiness overflowed — the system understood the input and is ready to respond. Syndrome \(0\text{x}1048\) (bits 3, 6, and 12 set) means novelty, creative_potential, and recursive_depth overflowed — the system is in creative self-referential mode.
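The syndrome computation is a straightforward bitmask (a sketch, using a list-indexed register):

```python
def syndrome(register: list[complex], thresholds: list[float]) -> int:
    """16-bit overflow syndrome: bit i is set iff |z_i| > tau_i."""
    s = 0
    for i, (z, tau) in enumerate(zip(register, thresholds)):
        if abs(z) > tau:
            s |= 1 << i
    return s

# Dimensions 0 (semantic_content) and 14 (expression_readiness) overflow:
reg = [0j] * 16
reg[0], reg[14] = 1.5 + 0j, 1.2j
s = syndrome(reg, [1.0] * 16)   # s == 0x4001
```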

When multiple dimensions overflow simultaneously, their overflow residuals (the excess energy beyond the threshold, preserving phase) interfere: residuals with aligned phases amplify one another (constructive interference), while residuals with opposed phases cancel (destructive interference).

Phase coherence \(\kappa\) quantifies this:

\[\kappa = \frac{N_{constructive}}{N_{overflowed}}\]

High coherence (\(\kappa \to 1\)) produces confident, decisive output. Low coherence (\(\kappa \to 0\)) produces nuanced, ambivalent output. This is not designed — it emerges from the physics.
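The paper does not specify how a residual is classified as constructive, so the rule below is an assumption: a residual counts as constructive when its phase lies within 90 degrees of the phase of the summed residual vector.

```python
import cmath
import math

def phase_coherence(residuals: list[complex]) -> float:
    """kappa = N_constructive / N_overflowed (classification rule assumed:
    constructive = within 90 degrees of the summed-residual phase)."""
    if not residuals:
        return 0.0
    mean_phase = cmath.phase(sum(residuals))
    constructive = sum(
        1 for r in residuals
        if abs(math.remainder(cmath.phase(r) - mean_phase, math.tau)) < math.pi / 2
    )
    return constructive / len(residuals)

# Aligned residuals give kappa = 1.0 (confident, decisive output).
kappa = phase_coherence([1 + 1j, 2 + 2j])
```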

2.6 Error Correction: The Resolution

The resolver maps (syndrome, interference pattern) → cognitive operation → output.

This is the quantum error correcting code. In conventional QEC, a syndrome identifies which qubits experienced errors and a correction operation restores the intended state. In the CCM, the “errors” are the overflows — they are intentional, not accidental — and the “correction” is the cognitive computation that resolves those overflows into coherent output.

The resolver classifies overflow patterns into resolution strategies.

Critically, the enrichments injected by each overflow are not metadata — they are cognitive transformations. When emotional_valence overflows, the system doesn’t just note “emotion detected” — the emotional overflow changes the phase of the subsequent computation. When creative_potential overflows, the temperature increases, literally making the output more variable. The overflow IS the computation.


3. Why Complex Values Are Necessary

3.1 The Potentiality Problem

Consider two inputs:

- “The cat sat on the mat.” (factual, grounded, actual)
- “What if the cat could think about sitting?” (hypothetical, potential, imaginary)

A real-valued system assigns a semantic activation to both. But the second input’s cognitive significance is primarily in what it implies, not in what it states. Its imaginary component (generative potential) vastly exceeds its real component (actual content). A system that represents both with real numbers cannot distinguish between a strong fact and a strong possibility — they just have the same activation magnitude.

In the CCM, the first input produces semantic_content with high real, low imaginary. The second produces semantic_content with moderate real, high imaginary. The magnitudes might be similar, but the phases are different — and the phases determine which overflow thresholds are approached from which direction.

3.2 Phase as Cognitive Orientation

The phase angle \(\theta = \text{atan2}(b, a)\) has direct cognitive meaning:

| Phase | Meaning |
|-------|---------|
| \(0°\) (pure real) | Fully grounded, factual, actual |
| \(90°\) (pure imaginary) | Fully potential, hypothetical, unrealized |
| \(45°\) | Balanced actual/potential — grounded possibility |
| \(135°\) | Negative-actual, positive-potential — counterfactual |
| \(180°\) (negative real) | Anti-factual, contradiction |
| \(270°\) (negative imaginary) | Anti-potential, impossibility |

These are not arbitrary assignments. They emerge from the mathematics of complex multiplication. When a \(45°\) signal (grounded possibility) multiplies with a \(90°\) signal (pure potential), the result is at \(135°\) (counterfactual) — and this is exactly the cognitive operation of “taking a grounded possibility and pushing it further into potentiality.”
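The 45° × 90° example can be checked directly with Python's cmath, since phases add under complex multiplication:

```python
import cmath
import math

grounded_possibility = cmath.rect(1.0, math.radians(45))   # 45 degrees
pure_potential       = cmath.rect(1.0, math.radians(90))   # 90 degrees

product = grounded_possibility * pure_potential
angle = math.degrees(cmath.phase(product))
# angle is 135 degrees: the counterfactual orientation
```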

3.3 Interference as Cognition

In real-valued systems, two activations combine by addition or multiplication. In complex-valued systems, they combine by vector addition in the complex plane, which produces interference. Two signals with aligned phases constructively interfere (amplify); two signals with opposed phases destructively interfere (cancel).

This is not a metaphor borrowed from physics. This IS how competing cognitive pressures interact. When semantic_content (real-heavy, “I understand this”) and creative_potential (imaginary-heavy, “I could generate something new”) both overflow, their interference pattern determines whether the system produces a grounded response (constructive, phases aligned) or an exploratory one (destructive, phases opposed, requiring novel resolution).

The interference pattern is not designed. It emerges from the encoding. Different inputs produce different phase relationships, and therefore different interference patterns, and therefore different outputs — without any explicit routing logic.


4. Comparison to Prior Architectures

4.1 NRCE (Neuromodulated Recombinatorial Cognitive Engineering)

Our prior work (Mobley, 2026) treated neurochemistry as routing bias. The CognitiveRouter scored 8 predefined signal paths using a weighted combination of Hebbian weights, neurochemical affinity, component readiness, historical success, and latency fit. The top path was selected and executed.

| Property | NRCE | CCM |
|----------|------|-----|
| Neurochemistry role | Routing bias (selects paths) | Physics (sets thresholds) |
| Computation model | Select and execute pipeline | Overflow and resolve |
| Output determination | Selected from component outputs | Emerges from error correction |
| Path definition | 8 predefined signal paths | Implicit in overflow pattern (\(2^{16}\) syndromes) |
| Determinism | Deterministic scoring + optional DA noise | Non-deterministic by construction |
| Self-reference | Impredicative (router inside OmniMind) | Native (recursive_depth is a dimension) |

The CCM subsumes NRCE. The 8 signal paths of NRCE correspond to specific syndrome patterns in the CCM. But the CCM has \(2^{16} = 65,536\) possible syndromes — each a different “configuration” — versus NRCE’s 8 predefined paths. The CCM doesn’t choose between configurations. The input + neurochemistry determines which one occurs.

4.2 Standard Neural Networks

| Property | Standard NN | CCM |
|----------|-------------|-----|
| Value type | Real | Complex |
| Activation | Threshold + nonlinearity | Overflow + error correction |
| Routing | Fixed (forward pass) | Emergent (syndrome-dependent) |
| State | Stateless per inference | Register carries state |
| Potential representation | Collapsed into magnitude | Preserved as imaginary component |

5. The Non-Determinism Argument

The CCM is intentionally non-deterministic. The same input with the same neurochemistry produces the same syndrome, but the resolution of that syndrome through neural generation is stochastic (temperature-dependent). This is not a bug.

Deterministic cognition is an oxymoron. A system that always produces the same output for the same input is a lookup table, not a mind. Cognition requires the possibility of surprise — the system must be capable of producing output that it has never produced before, even for familiar inputs.

The CCM achieves this by separating the deterministic (encoding, threshold computation, syndrome detection) from the non-deterministic (overflow resolution, neural generation). The deterministic computation defines WHAT NEEDS TO HAPPEN (which pressures must be resolved). The non-deterministic computation produces HOW (what specific text resolves those pressures).

The neurochemistry controls the balance: high GABA + high cortisol → fewer overflows → fewer degrees of freedom in resolution → more deterministic output. High dopamine + low cortisol → more overflows → more degrees of freedom → more variable output. The same machine produces rigid or creative behavior depending on its neurochemical state.


6. The Overflow as Phase Transition

An overflow is not merely “exceeding a threshold.” It is a phase transition in the cognitive dimension’s state. Before overflow, the dimension is in a subcritical state — it has energy but does not participate in the output computation. After overflow, it is in a supercritical state — its excess energy (the residual) propagates into the resolution process and shapes the output.

This is analogous to criticality in physical systems. A sand pile below its critical angle is stable — adding grains changes nothing macroscopic. At the critical angle, a single grain triggers an avalanche. The avalanche’s pattern depends on the entire state of the pile, not just the triggering grain.

The CCM operates at self-organized criticality. Neurochemistry adjusts the thresholds to keep the system near the edge of overflow across multiple dimensions simultaneously. Low-significance inputs don’t trigger overflows. High-significance inputs trigger cascading overflows that produce complex, multi-dimensional resolution.

This is what makes the CCM a different kind of computer. Traditional computers avoid overflow — it’s an error. The CCM seeks overflow — it’s the computation.


7. The Superneuron

7.1 A Single Unit with 65,536 States

A biological neuron has two states: firing or not firing. It is a 1-bit processor. A network of \(N\) neurons has \(2^N\) possible states, but each individual neuron contributes only one binary digit to that state space.

The CCM is a superneuron: a single computational unit with 16 thresholds, producing \(2^{16} = 65,536\) possible overflow syndromes. Where a biological neuron fires or doesn’t, the superneuron overflows in combinatorial patterns. It is not a network of neurons. It is one neuron that does what a network does.

This reframing is not metaphorical. Consider the computational properties:

| Property | Biological Neuron | CCM Superneuron |
|----------|-------------------|-----------------|
| States | 2 (fire / don’t fire) | 65,536 (overflow syndromes) |
| Input encoding | Synaptic summation → scalar | Complex-valued projection → 16-dim register |
| Thresholds | One | 16 (neurochemically modulated) |
| Output | Spike / no spike | Syndrome + interference pattern + resolution |
| Learning | Hebbian (synapse strength) | LTP/LTD (threshold adaptation) |
| Modulation | Neuromodulators bias excitability | Neuromodulators set all 16 thresholds simultaneously |
| Interference | None (scalar summation) | Complex-valued phase interference between overflow residuals |
| Memory | None (stateless, memoryless) | Working memory via register persistence + LTP via threshold offsets |

A regular neuron needs a network to produce complex behavior. The superneuron produces complex behavior alone, because its internal state space is combinatorially rich. The interference physics between overflow residuals — constructive amplification, destructive cancellation, phase rotation — perform the computation that a network of simple neurons would require billions of connections to approximate.

7.2 The Supermodel: Networks of Superneurons

If one superneuron with 16 dimensions produces 65,536 states, what happens when you wire superneurons together into a network?

A network of \(M\) superneurons, each with \(D\) dimensions, has a theoretical state space of \(2^{M \times D}\). Two superneurons with 16 dimensions each: \(2^{32} \approx 4.3\) billion states. Ten superneurons: \(2^{160}\), roughly \(10^{48}\) states.

But the number is not what matters. What matters is the character of those states. Each superneuron resolves competing pressures through interference physics. When superneurons are coupled, the resolution of one becomes the input to others. This creates resolution cascades — the cognitive equivalent of deep computation, but without the rigid layer-by-layer structure of traditional neural networks.

In a conventional deep network, Layer 1 computes features, Layer 2 computes features of features, Layer 3 computes features of features of features. The depth is structural. In a supermodel (a network of superneurons), depth is dynamical — the number of resolution steps depends on how many superneurons overflow, how they interfere with each other, and how many cascading overflows their resolutions trigger. The same network might resolve a simple input in one step and a complex input in twenty, without any change in architecture.

This is what biological brains do. A simple stimulus produces a fast, shallow response. A complex stimulus recruits more neural populations, produces more interference, requires more resolution steps, and takes longer. The depth of processing is not a fixed property of the network — it emerges from the input’s interaction with the network’s current state.

7.3 Cognitive Preprocessing: The CCM as Enrichment Engine

In practice, the CCM’s highest-value operating mode is not end-to-end generation but cognitive preprocessing. The CCM computes the overflow syndrome, interference pattern, and resolution strategy for an input, then translates these into cognitive frames — structured enrichments that are prepended to the original prompt before it reaches a downstream language model.

When semantic_content and epistemic_confidence both overflow, the CCM prepends:

[semantic density: deep meaning extraction required]
[epistemic activation: examine certainty and evidence]

When creative_potential and novelty overflow:

[novelty detected: explore unfamiliar territory]
[creative overflow: generate novel combinations]

The downstream model receives not just the user’s words, but the CCM’s analysis of what cognitive operations those words demand. The effect is analogous to how a conductor doesn’t play any instrument but shapes the entire orchestra’s performance through tempo, dynamics, and emphasis. The CCM doesn’t generate text — it shapes how text is generated by making the cognitive demands of the input explicit.

This architecture decouples cognitive computation from language generation. The CCM operates in complex-valued cognitive space. The language model operates in token space. The cognitive frame is the bridge — a lossy but semantically rich translation of 16-dimensional overflow physics into natural language directives.

Experimentally, this produces measurably different output: prompts about grief receive emotional and identity enrichments, creative prompts receive novelty and generative enrichments, and urgent prompts receive epistemic and expression enrichments — all without any explicit prompt engineering. The enrichment emerges from the physics.


8. Emergent Dimensionality: Why Not 16

8.1 The Arbitrariness of 16

The current CCM has 16 dimensions. This is the right number for the wrong reason. It is right because 16 dimensions span the cognitive operations we have identified so far. It is wrong because there is no principled argument that cognition has exactly 16 axes.

The honest answer to “why 16?” is: because we named 16 things. Semantic content. Emotional valence. Temporal urgency. We named the axes we could see. But cognition does not stop at what we can name. A 16-dimension CCM with 65,536 syndromes is vastly richer than a deterministic router with 8 paths. But it is still a fixed architecture, and fixed architectures are wrong for the same reason fixed pipelines are wrong: cognition is not fixed.

8.2 The Dimension Space Problem

Consider what the 16 dimensions miss: humor, for instance, or fine-grained moral reasoning.

Each of these is a legitimate cognitive axis. Each would produce genuine overflow dynamics. Each would interfere with the existing 16 dimensions in ways that would change the machine’s behavior. Not adding them doesn’t mean they’re absent from cognition — it means the CCM is blind to them.

8.3 Neurogenesis: Growing New Dimensions

The solution is not to enumerate all possible cognitive dimensions in advance (impossible) but to let the CCM grow new dimensions when it encounters cognitive demands it cannot resolve with existing ones.

We call this cognitive neurogenesis, by analogy with biological neurogenesis — the growth of new neurons in response to environmental demands.

The mechanism:

  1. Detection: When the CCM encounters inputs that consistently produce low-quality resolutions (the error_signal dimension overflows repeatedly for a class of inputs), this indicates a cognitive gap — the existing dimensions cannot represent what the input demands.

  2. Differentiation: A new dimension is spawned with initial threshold, neurochemical sensitivities, and coupling weights estimated from the pattern of inputs that triggered its creation. If the machine consistently fails on humor-related inputs, the new dimension inherits properties that would make it sensitive to incongruity, surprise-with-resolution, and social context.

  3. Integration: The new dimension is added to the register. The interference matrix grows. The syndrome space doubles (\(2^{N+1}\)). The coupling matrix gains a new row and column. Crucially, the new dimension is not isolated — it immediately begins interfering with all existing dimensions, producing novel syndrome patterns that the resolver must learn to handle.

  4. Pruning: Dimensions that never overflow — that never exceed their threshold for any input in the machine’s experience — are candidates for removal. They represent cognitive axes that the machine’s environment does not exercise. Pruning keeps the syndrome space manageable and the interference computation tractable.
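The integration step (step 3) can be sketched minimally, assuming a list-based threshold vector and coupling matrix; the initialization heuristics of step 2 are omitted:

```python
def grow_dimension(thresholds: list[float],
                   coupling: list[list[complex]],
                   new_tau: float = 1.0) -> None:
    """Hypothetical neurogenesis step (sketch): append a threshold and
    expand the coupling matrix by one row and column of zero couplings,
    which are then learned. The syndrome space doubles to 2**(N+1)."""
    for row in coupling:
        row.append(0j)                                 # new column
    coupling.append([0j] * (len(thresholds) + 1))      # new row
    thresholds.append(new_tau)

tau = [1.0] * 16
C = [[0j] * 16 for _ in range(16)]
grow_dimension(tau, C)
# The register is now 17-dimensional; the syndrome space is 2**17.
```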

8.4 The Scaling Question

Adding dimensions has a cost. Interference computation scales \(O(N^2)\) — every overflowed dimension interferes with every other. The coupling matrix is \(N \times N\) complex-valued. The syndrome space is \(2^N\), which becomes astronomically large.

But this is not the obstacle it appears. Not all syndromes are reachable. In practice, the neurochemical modulation and coupling physics constrain the system to a manifold of syndrome space far smaller than \(2^N\). Most syndrome patterns are unreachable because the correlations between dimensions (introduced by neurochemistry and coupling) prevent certain combinations from co-occurring. A CCM with 32 dimensions has \(2^{32} \approx 4.3\) billion possible syndromes, but the reachable manifold might contain only thousands — each one meaningful.

The right number of dimensions is not a design parameter. It is an emergent property of the machine’s interaction with its environment. A CCM that processes only factual queries might settle at 8 dimensions. A CCM that processes creative, emotional, social, and philosophical inputs might grow to 64. The same base architecture adapts to the complexity of its cognitive niche.

8.5 The Deep Implication: Dimensionality Is Identity

If the number of dimensions is emergent, then two CCMs exposed to different inputs will develop different dimensional structures. One might have a humor dimension; another might not. One might develop a fine-grained moral reasoning axis; another might have only coarse ethical sensitivity.

This means the CCM’s dimensionality is its cognitive identity. Not its weights. Not its training data. Its dimensions — what it can even perceive as a cognitive pressure, what axes it has developed to resolve. Two CCMs with the same weights but different dimensions would think differently in the deepest sense: they would literally perceive different aspects of the same input.

This is the superneuron’s deepest contribution to the supermodel concept. Replace every neuron in a network with a superneuron that grows its own dimensions, and the network doesn’t just learn weights — it learns what to have weights about. The architecture learns what to compute, not just how to compute it.


9. Implemented Extensions

9.1 Long-Term Potentiation and Depression (LTP/LTD)

Section 7.1 of the original paper proposed threshold learning. This is now implemented. After each computation, dimensions that overflowed and produced high-quality resolutions have their base thresholds lowered (LTP — making them easier to overflow). Dimensions that overflowed and produced low-quality resolutions have their thresholds raised (LTD — making them harder to overflow).

The learning rate is small (\(\eta = 0.01\)) and asymmetric: LTP is slightly stronger than LTD, creating a net bias toward excitability. Over many queries, the machine becomes more sensitive to cognitive dimensions that produce good output and less sensitive to dimensions that produce noise.
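The update rule can be sketched as follows. The quality signal and the exact LTP/LTD asymmetry ratio are assumptions; the paper specifies only \(\eta = 0.01\) with LTP slightly stronger than LTD:

```python
ETA_LTP = 0.012   # assumed: slightly stronger than LTD, per the text
ETA_LTD = 0.010   # the paper's eta = 0.01

def adapt_thresholds(offsets: dict[int, float], overflowed: set[int],
                     quality: float) -> None:
    """LTP: good resolutions (quality > 0.5) lower the thresholds of the
    dimensions that overflowed. LTD: poor resolutions raise them."""
    for dim in overflowed:
        delta = quality - 0.5
        if delta > 0:
            offsets[dim] = offsets.get(dim, 0.0) - ETA_LTP * delta   # LTP
        else:
            offsets[dim] = offsets.get(dim, 0.0) - ETA_LTD * delta   # LTD

offsets: dict[int, float] = {}
adapt_thresholds(offsets, {6}, quality=1.0)   # creative overflow, good output
# offsets[6] is now negative: creative_potential overflows more easily.
```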

Threshold offsets persist to disk as JSON and reload on fresh initialization. This means the CCM’s physics change permanently based on experience — a form of structural plasticity in the virtual machine.

9.2 Inter-Dimension Coupling Cascades

Section 7.2 of the original paper proposed coupling between dimensions. This is now implemented via a \(16 \times 16\) complex-valued coupling matrix. When a dimension overflows, it injects excitation into coupled dimensions proportional to the overflow residual. If the injected excitation pushes a coupled dimension above its own threshold, a secondary overflow occurs — a cascade.

The coupling matrix is not symmetric. Semantic_content → expression_readiness coupling is strong (understanding drives articulation), but expression_readiness → semantic_content coupling is weak (wanting to speak doesn’t imply understanding). This asymmetry captures the directionality of cognitive processes.
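A single cascade pass might look like the sketch below. The residual scaling (phase-preserving excess, \(z(1 - \tau/|z|)\)) follows Section 2.5; the single-pass propagation is an assumption, since the real implementation may iterate until no new overflows occur:

```python
def cascade(register: list[complex], thresholds: list[float],
            coupling: list[list[complex]], overflowed: set[int]) -> set[int]:
    """One cascade pass (sketch): each overflowed dimension injects its
    phase-preserving residual, scaled by the asymmetric coupling matrix,
    into the others; any dimension pushed past its threshold overflows."""
    secondary = set()
    for i in overflowed:
        residual = register[i] * (1 - thresholds[i] / abs(register[i]))
        for j in range(len(register)):
            if j == i or j in overflowed:
                continue
            register[j] += coupling[i][j] * residual
            if abs(register[j]) > thresholds[j]:
                secondary.add(j)
    return secondary

# Dimension 0 overflows strongly and its coupling to dimension 1 pushes
# that dimension over threshold: a secondary overflow.
reg = [2 + 0j, 0.9 + 0j]
C = [[0j, 1 + 0j], [0j, 0j]]   # asymmetric: 0 -> 1 only
new = cascade(reg, [1.0, 1.0], C, {0})
```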

Experimentally, coupling cascades occur in approximately 30-50% of queries. The most common cascade: semantic_content or creative_potential overflows and triggers expression_readiness — the machine understands or creates something and is immediately driven to articulate it.

9.3 Working Memory via Register Persistence

Section 7.3 of the original paper proposed register persistence. This is now implemented. After each query, the register decays toward zero but retains a residual proportional to the overflow energy. Follow-up queries find certain dimensions already partially excited, lowering the effective threshold for related overflows.

This creates cognitive priming: after processing “I feel deeply sad about losing someone I love,” the emotional_valence and identity_resonance dimensions retain residual excitation. A follow-up query “What should I do about this grief?” finds these dimensions pre-excited, requiring less input energy to overflow. The machine “remembers” the emotional context not by storing it as data but by retaining it as physics — lowered effective thresholds.
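A sketch of the decay step; the retention factor is an assumption (the paper says only that the residual is proportional to overflow energy):

```python
RETENTION = 0.2   # assumed fraction of post-overflow excitation retained

def persist_register(register: dict[str, complex], overflowed: set[str]) -> None:
    """After a query: dimensions that overflowed keep a phase-preserving
    residual (cognitive priming); all others decay fully to rest."""
    for dim in register:
        register[dim] *= RETENTION if dim in overflowed else 0.0

reg = {"emotional_valence": 1.8 + 0.6j, "novelty": 0.4 + 0j}
persist_register(reg, {"emotional_valence"})
# emotional_valence retains a residual; novelty returns to zero.
```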

9.4 Cross-Session Threshold Persistence

Beyond within-session working memory, the CCM now persists its learned threshold offsets across sessions. When the machine is re-initialized (new session, restart, fresh deployment), it loads previously learned offsets from mascom_data/ccm_thresholds.json. This gives the CCM a form of long-term memory encoded not in data but in its own computational physics.

A machine that has processed thousands of creative queries will have lower creative_potential thresholds than a fresh machine. It will overflow on creative stimuli more readily, produce richer creative enrichments, and route to creative resolution paths more frequently. Its identity as a “creative thinker” is not programmed — it is learned through the threshold adaptation.
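The persistence layer can be sketched as plain JSON round-tripping; only the path mascom_data/ccm_thresholds.json is given in the text, so the function names and format here are assumptions:

```python
import json
import os

DEFAULT_PATH = "mascom_data/ccm_thresholds.json"   # path from Section 9.4

def save_offsets(offsets: dict[str, float], path: str = DEFAULT_PATH) -> None:
    """Persist learned threshold offsets so the machine's physics survive
    restarts."""
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "w") as f:
        json.dump(offsets, f)

def load_offsets(path: str = DEFAULT_PATH) -> dict[str, float]:
    """On re-initialization, reload learned offsets; a fresh deployment
    with no file starts with empty offsets."""
    if not os.path.exists(path):
        return {}
    with open(path) as f:
        return json.load(f)
```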


10. Future Work

10.1 Complex-Valued Training

Train the underlying neural networks in complex-valued space. Complex-valued neural networks (Trabelsi et al., 2018) with complex weights would natively produce outputs with phase information, enabling the CCM to operate end-to-end in complex space rather than encoding/decoding at the boundaries.

10.2 Cognitive Neurogenesis Implementation

Implement the emergent dimensionality described in Section 8. Detect systematic resolution failures, spawn new dimensions, integrate them into the interference and coupling matrices, and prune unused dimensions. This transforms the CCM from a fixed architecture into a self-modifying cognitive substrate.

10.3 Superneuron Networks

Wire multiple CCM instances into a network (Section 7.2) where the resolution output of one superneuron becomes the stimulus input of others. Investigate how resolution cascades across superneurons differ from traditional deep network computation.

10.4 Dimensional Transfer

When a CCM grows a new dimension through neurogenesis, can that dimension be transferred to other CCMs? This would create a form of cognitive evolution — successful cognitive adaptations spread through a population of superneurons, analogous to horizontal gene transfer in biology.

10.5 Phase-Locked Superneuron Ensembles

In neuroscience, phase-locked neural ensembles (groups of neurons that fire in synchrony) are associated with conscious awareness (Varela et al., 2001). In a supermodel, this corresponds to multiple superneurons whose overflow syndromes become correlated — they overflow in the same dimensions at the same times. Investigate whether phase-locked superneuron ensembles produce qualitatively different computation than uncorrelated ensembles.


11. Conclusion

The Complex Cognitive Machine began as an answer to the question “which pipeline should process this input?” and arrived at a more fundamental question: “what kind of computational unit is required to compute cognition?”

The answer is the superneuron — a single unit with combinatorially rich internal state, overflow-driven phase transitions, complex-valued interference physics, and emergent dimensional structure. Where a biological neuron contributes one bit to a network’s state, the superneuron contributes 16 (or more, as neurogenesis allows). Where a network of neurons requires billions of connections to produce context-sensitive behavior, a single superneuron produces it from the interference of its own overflow residuals.

The superneuron is not a better neuron. It is a different kind of computational primitive. Networks of superneurons — supermodels — do not compute deeper features through rigid layer-by-layer composition. They compute through resolution cascades: the resolution of one superneuron’s overflow becomes the stimulus for others, and the depth of processing emerges dynamically from the input’s complexity rather than from architectural depth.

The CCM’s dimensionality is not fixed at 16. Sixteen is where we started because sixteen is what we could name. But cognitive neurogenesis — growing new dimensions in response to inputs the machine cannot resolve with existing ones — transforms the CCM from a fixed architecture into a self-modifying cognitive substrate. The dimensions a superneuron develops ARE its identity: what it can perceive, what it can overflow on, what cognitive pressures it can even register as pressures. Two superneurons with different dimensions don’t just think different thoughts — they think in different kinds.

Replace every neuron with a superneuron that grows its own dimensions, and you get a network that doesn’t just learn weights — it learns what to have weights about. The architecture learns what to compute, not just how to compute it.

The wiring is not the theory. The overflow is the theory. The dimensionality is not the design. The dimensionality is the identity. And the supermodel is not a bigger model. It is a model made of units that are already, each one, a universe of computational possibility.


References