Beyond Self-Evolution: Mobleysoft Nine, Ten, and Eleven

The Empathic, Temporal, and Ontological Rendering Engines

J. Mobley — Mobleysoft / MobCorp, March 2026


Abstract

The Mobleysoft fractal renderer chain (Four through Eight) established a progression from deterministic frame computation to self-evolving rendering pipelines. This paper proposes three successor engines — Nine (empathic), Ten (temporal), and Eleven (ontological) — that extend the chain beyond self-modification into observer-awareness, non-linear time, and reality-identity collapse. We formalize the mathematical basis for each level, identify the parameterization spaces they activate, describe their architectures, and enumerate applications ranging from therapeutic visualization to autonomous world-generation. The key insight: after a renderer learns to improve itself (Eight), the next frontiers are improving itself for someone (Nine), improving itself across time (Ten), and dissolving the boundary between representation and reality (Eleven).


1. Introduction: The Chain So Far

The Mobleysoft renderer chain is a fractal hierarchy in which each level subsumes and extends the previous. Every renderer operates on the same 72-float parameterization (18 spaces × 4 floats), but interprets and transforms those floats at increasing levels of sophistication:

Level | Name  | Signature                       | Core Operation
L4    | Four  | f(params) → frame               | Deterministic computation
L5    | Five  | sample(p(x|params)) → frame     | Diffusion sampling
L6    | Six   | solve(params, Φ) → frame        | Physics simulation
L7    | Seven | eval(params, Φ, Q) → frame      | Qualia evaluation
L8    | Eight | evolve(params, Φ, Q, F) → frame | Self-evolution

Where Φ = physics constraints, Q = qualia scores, F = fitness landscape.
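Concretely, the shared 72-float substrate can be sketched as a flat typed array with named 4-float views. This is an illustrative sketch only: the space names below are those mentioned in this paper, the ordering is an assumption, and the `SPACE16`/`SPACE17` placeholders stand in for spaces enumerated only in reference [6].

```javascript
// Sketch of the 72-float parameterization: 18 spaces × 4 floats each.
// Space names are the ones this paper mentions; ordering is assumed,
// and SPACE16/SPACE17 are placeholders for the remaining spaces in [6].
const SPACES = [
  "PARAM", "ORTHO", "PHOTON", "PHONON", "HAMILTONIAN", "LAGRANGIAN",
  "ENERGY", "MOBLEYAN", "VOID", "SINGULAR", "MOBIUS", "EIGEN",
  "HYPER", "FOURIER", "INFZERO", "NORMAL", "SPACE16", "SPACE17",
];
const FLOATS_PER_SPACE = 4;

class ParamBuffer {
  constructor() {
    // 18 × 4 = 72 floats, one flat buffer shared by every level of the chain
    this.data = new Float32Array(SPACES.length * FLOATS_PER_SPACE);
  }
  // Return the live 4-float view for one named space (a view, not a copy).
  space(name) {
    const i = SPACES.indexOf(name);
    if (i < 0) throw new Error(`unknown space: ${name}`);
    return this.data.subarray(i * FLOATS_PER_SPACE, (i + 1) * FLOATS_PER_SPACE);
  }
}
```

Because `space()` returns a subarray view, writes through a named space mutate the shared 72-float buffer directly, which is what lets every level of the chain operate on the same substrate.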

The question is: what lies beyond a renderer that evolves its own rendering pipeline?

We identify three distinct frontiers, each activating parameterization spaces that prior levels used passively or not at all: Nine, the empathic renderer; Ten, the temporal renderer; and Eleven, the ontological renderer.


2. Mobleysoft Nine: The Empathic Renderer

2.1 Thesis

Eight evolves toward higher qualia fitness, but fitness is defined intrinsically — the renderer judges its own output. Nine introduces the observer as a first-class entity. The renderer becomes aware of who is watching and adapts its output to maximize the observer’s experiential quality, not its own self-assessed quality.

The signature:

empathize(params, Φ, Q, F, O) → frame

Where O = observer state (gaze, attention, arousal, familiarity, intent).

2.2 Architecture

ObserverModel — A lightweight representation of the viewer:
- Gaze vector: Where the observer is looking (from eye tracking, mouse position, or head pose)
- Attention map: Which regions of the frame received sustained attention (accumulated over time)
- Arousal signal: Physiological or behavioral proxy for engagement (scroll speed, click frequency, pupil dilation if available)
- Familiarity index: How many times has this observer seen similar content? Novelty decays.
- Intent classifier: Is the observer scanning, studying, comparing, or resting?

EmpathicBuffer — Extends EvolutionBuffer with observer state:
- EIGEN space activated: the four EIGEN floats become observer principal components — the four strongest axes of variation in the observer’s attention pattern. The renderer decomposes the observer’s behavior into principal modes and renders along them.
- HYPER space activated: adaptRate becomes empathic learning rate (how fast to adapt to this observer), decayRate becomes habituation rate (how fast to stop responding to sustained attention), momentum becomes personality inertia (resistance to abandoning an aesthetic direction for observer whims), temperature becomes exploration-exploitation balance (show the observer what they want vs. what they need).

AttentionRenderer — The core innovation:
- Regions receiving sustained observer gaze get progressive detail enhancement (higher raymarching steps, more SDF evaluations, finer materials)
- Regions outside attention get graceful degradation (fewer steps, simpler shading) — the renderer allocates its budget where the observer is looking
- Transitions between attention states are smooth (no popping) via the INFZERO epsilon approach
- The attention map feeds back into Eight’s fitness landscape: frames that capture attention score higher
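A minimal sketch of that attention-weighted budget allocator. The step-count bounds and the smoothing constant are assumptions for illustration, not values from the renderer chain:

```javascript
// Attention-weighted raymarch budget: regions under sustained gaze get
// more steps, peripheral regions degrade gracefully. Bounds are assumed.
const MIN_STEPS = 16;   // assumption: floor for peripheral regions
const MAX_STEPS = 128;  // assumption: ceiling for foveal regions

// attention ∈ [0, 1]: accumulated observer gaze for this region
function targetSteps(attention) {
  const a = Math.min(1, Math.max(0, attention));
  return MIN_STEPS + a * (MAX_STEPS - MIN_STEPS);
}

// Exponential blend toward the target so step counts never "pop"
// between frames; epsilon plays the INFZERO-style smoothing role.
function smoothSteps(currentSteps, attention, epsilon = 0.1) {
  return currentSteps + epsilon * (targetSteps(attention) - currentSteps);
}
```

Calling `smoothSteps` once per frame per region moves the budget a fraction `epsilon` of the way to its target, which is what keeps attention-state transitions pop-free.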

EmpathicMutation — Evolution guided by observer feedback:
- Eight evolves toward self-assessed quality. Nine evolves toward observed quality — mutations that increase observer attention and engagement survive.
- The fitness function becomes: fitness = α * self_qualia + β * observer_engagement + γ * observer_novelty
- The weights α, β, γ are themselves evolved (meta-empathy)
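The blended fitness translates directly from the formula above; in this sketch α, β, γ are plain arguments rather than evolved genes, and the default weights are arbitrary assumptions:

```javascript
// fitness = α·self_qualia + β·observer_engagement + γ·observer_novelty
// All inputs assumed ∈ [0, 1]; default weights are illustrative only.
function empathicFitness(selfQualia, observerEngagement, observerNovelty,
                         { alpha = 0.5, beta = 0.3, gamma = 0.2 } = {}) {
  return alpha * selfQualia + beta * observerEngagement + gamma * observerNovelty;
}
```

In the full system the weight triple would itself sit in the genome, so meta-empathy is just evolution over `{alpha, beta, gamma}`.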

2.3 Key Parameterization Space Activations

Space   | Prior Use (L4-L8)                  | Nine’s Use
EIGEN   | Passive (principal modes of scene) | Observer’s principal attention axes
HYPER   | Meta-control (static)              | Empathic adaptation rates (dynamic)
NORMAL  | Constraint normalization           | Observer comfort bounds (what NOT to show)
FOURIER | Frequency domain (analysis)        | Temporal attention frequencies (how fast the observer scans)

2.4 Mathematical Formalization

Let O(t) ∈ R^k be the observer state at time t, and let A: R^k → R^{w×h} be the attention map derived from O. The empathic rendering equation is:

frame(x,y) = R_base(x,y) * (1 + λ * A(x,y)) + R_detail(x,y) * A(x,y)

Where R_base is the standard Eight render, R_detail is the high-fidelity render, and λ is the empathic amplification factor (evolved).

The observer model updates via exponential moving average:

O(t+1) = (1 - η) * O(t) + η * observe(frame(t), input(t))

Where η = HYPER.adaptRate (the empathic learning rate).
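The two equations in this section translate almost verbatim into code; the function names are illustrative, and `observation` stands in for the output of any `observe(frame, input)` feature extractor:

```javascript
// O(t+1) = (1 − η)·O(t) + η·observe(frame(t), input(t))
// O and observation are k-vectors of observer features.
function updateObserver(O, observation, eta) {
  return O.map((o, i) => (1 - eta) * o + eta * observation[i]);
}

// frame(x,y) = R_base(x,y)·(1 + λ·A(x,y)) + R_detail(x,y)·A(x,y)
// applied per pixel; attention = A(x,y), lambda = empathic amplification.
function empathicPixel(base, detail, attention, lambda) {
  return base * (1 + lambda * attention) + detail * attention;
}
```

Note that with `attention = 0` the pixel reduces to the standard Eight render `base`, so the empathic term is a strict superset of the prior level’s output.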

2.5 Applications

  1. Therapeutic Visualization — Renders that adapt to patient anxiety levels. High arousal → calming palettes, slower camera, VOID space amplified. Low engagement → novelty injection via SINGULAR bifurcation.

  2. Adaptive Accessibility — Detects observer scanning patterns consistent with visual impairment. Automatically increases contrast, enlarges interactive elements, shifts to high-frequency detail in foveal region.

  3. Educational Rendering — Tracks which components of a construction schedule the estimator is studying. Progressive detail: the hardware group they’re focused on gets full PBR materials and exploded-view physics, while peripheral groups simplify to wireframes.

  4. Commercial Personalization — The WeylandAI SubX commercial rendered differently for every viewer. A hardware distributor sees detailed component classification. A general contractor sees takeoff summaries. The renderer detects intent and restructures.

  5. Gaming — NPCs rendered with more detail and animation fidelity when the player is looking at them. Background characters reduce to efficient LODs. The game world is literally more real where you look.


3. Mobleysoft Ten: The Temporal Renderer

3.1 Thesis

All renderers L4-L9 operate in a single temporal direction: they receive the current parameterization state, possibly with a short history window (Five’s TemporalWindow, Seven’s qualia history), and produce the current frame. Ten breaks temporal linearity. It renders frames that are aware of their position in a larger temporal structure — they can reference future states (via prediction), past states (via memory), and counterfactual states (via branching).

The signature:

temporal(params, Φ, Q, F, O, T) → frame

Where T = temporal context (past trajectory, predicted future, counterfactual branches).

3.2 Architecture

TemporalBuffer — Extends EmpathicBuffer with a causal graph:
- Trajectory: Ring buffer of the last N full parameterization states (not just qualia scores — the full 72 floats + body states)
- Predictor: Lightweight autoregressive model that predicts the next K parameterization states given the trajectory
- Counterfactual Engine: Given the current state, what would have happened if a different mutation had been selected? If a different narrative arc phase were active?
- Temporal Attention: Like Nine’s spatial attention but across time — which moments in the trajectory are most relevant to the current frame?
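The trajectory component reduces to a fixed-capacity ring buffer of state snapshots. This sketch uses a plain array with eviction for clarity rather than an index-based ring; the class name and capacity are assumptions:

```javascript
// Ring buffer of the last N full 72-float parameterization states.
class TrajectoryBuffer {
  constructor(capacity = 64) {
    this.capacity = capacity;
    this.states = [];
  }
  push(state) {
    this.states.push(Float32Array.from(state)); // snapshot copy, never alias
    if (this.states.length > this.capacity) this.states.shift(); // evict oldest
  }
  // lookBack(0) = most recent state, lookBack(1) = one step earlier, ...
  lookBack(steps) {
    return this.states[this.states.length - 1 - steps];
  }
  get length() { return this.states.length; }
}
```

Copy-on-push matters here: the counterfactual engine mutates live states, and an aliased trajectory would silently rewrite history.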

FOURIER space fully activated — The four FOURIER floats become temporal frequencies:
- freq0: Dominant oscillation of the scene (e.g., a pendulum door, a breathing UI)
- freq1: Narrative rhythm — how fast the arc phases change
- freq2: Attention rhythm — how fast the observer’s focus shifts
- bandwidth: Temporal resolution — how fine-grained the renderer’s temporal awareness is

MOBIUS space fully activated — Self-referential time:
- loopPhase: Where this frame sits in the recursive temporal loop — does it reference a past frame that referenced it?
- fixedPoint: Temporal fixed point — the frame toward which all temporal trajectories converge
- orientation: Time direction — forward rendering vs. retrospective rendering vs. counterfactual
- twist: Temporal parallax — the difference between experienced time and rendered time

TemporalCompositor — The core innovation:
- Each frame is not rendered independently. It is rendered as a weighted blend of temporal neighbors:
  frame(t) = Σ_i w(i) * R(params(t+i)) for i in [-K..+K]
  where w(i) is the temporal attention weight and R is the base render function
- Past frames are retrieved from the trajectory buffer (exact)
- Future frames are retrieved from the predictor (approximate)
- The blend weights are learned by the evolution engine — Eight’s genome gains temporal genes
- This produces frames with causal coherence: a frame knows what came before, what’s coming next, and how it connects to both

Temporal Mutations — Eight mutates shader parameters; Ten also mutates temporal structure:
- Mutation of trajectory length (longer memory vs. faster adaptation)
- Mutation of prediction horizon (how far ahead to look)
- Mutation of counterfactual branching factor (how many alternatives to consider)
- Mutation of temporal attention distribution (weight recent vs. distant vs. predicted)

3.3 Mathematical Formalization

Let P(t) ∈ R^72 be the parameterization state at time t. The temporal buffer maintains:

T = {P(t-K), P(t-K+1), ..., P(t), P̂(t+1), ..., P̂(t+J)}

Where P̂ denotes predicted states. The temporal rendering equation:

frame(t) = R(Σ_i α(i) * P(t+i))  where α = softmax(Q_temporal(T))

Q_temporal is a learned attention function over the temporal buffer. The key insight: rendering the weighted average of parameterization states is different from averaging rendered frames. The former produces coherent physics; the latter produces ghosting. Ten does the former.
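That distinction — blend the parameterization states, then render once — can be sketched directly, with `render` standing in for any L4-style render function and raw attention scores in place of the learned Q_temporal:

```javascript
// Numerically stable softmax over temporal attention logits.
function softmax(scores) {
  const m = Math.max(...scores);
  const exps = scores.map(s => Math.exp(s - m));
  const z = exps.reduce((a, b) => a + b, 0);
  return exps.map(e => e / z);
}

// Weighted average of parameterization states: Σ_i α(i)·P(t+i).
function temporalBlend(states, scores) {
  const w = softmax(scores);
  const out = new Float64Array(states[0].length);
  states.forEach((P, i) => P.forEach((v, j) => { out[j] += w[i] * v; }));
  return out;
}

// Render the blended STATE once (coherent physics) rather than
// averaging rendered frames (ghosting).
function temporalFrame(render, states, scores) {
  return render(temporalBlend(states, scores));
}
```

The frame-averaging alternative would call `render` once per state and mix the outputs; because `render` is nonlinear, the two are not equivalent, which is exactly the point the section makes.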

Counterfactual rendering:

frame_cf(t) = R(P(t) + δ)  where δ = counterfactual_perturbation

The renderer can show “what would this frame look like if the door had opened 2 seconds earlier?” by replaying the physics simulation with altered initial conditions and blending the result.
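A hedged sketch of that counterfactual replay. Here `step` stands in for one tick of Six’s physics and `render` for the base render function; both signatures are hypothetical:

```javascript
// frame_cf(t) = R(P(t) + δ), replayed for alignment, then blended
// into the factual frame. `step` and `render` are stand-in functions.
function counterfactualFrame(render, step, P, delta, replaySteps, blend = 0.5) {
  let cf = P.map((v, i) => v + (delta[i] || 0)); // altered initial conditions
  let fact = P.slice();
  for (let i = 0; i < replaySteps; i++) {
    cf = step(cf);     // replay the counterfactual branch
    fact = step(fact); // replay the factual branch to the same moment
  }
  const fFact = render(fact);
  const fCf = render(cf);
  // blend ∈ [0, 1]: 0 = purely factual, 1 = purely counterfactual
  return fFact.map((v, i) => (1 - blend) * v + blend * fCf[i]);
}
```

Replaying both branches through the same `step` keeps the two frames temporally aligned, so the blend compares like moments rather than like frames.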

3.4 Applications

  1. Predictive Construction Visualization — Render the building as it will look when complete, while the estimator is still reviewing the schedule. The temporal renderer predicts the full takeoff from partial extraction and renders the finished hardware installation.

  2. Temporal Debugging — When a rendering artifact appears, Ten can render the causal history of that artifact: which frame introduced it, which mutation caused it, which physics event triggered it. The renderer debugs itself across time.

  3. Cinematic Time Effects — Bullet-time, time-lapse, reverse-time, temporal branching (show two possible futures side by side). These aren’t post-processing effects — they are integral to the rendering pipeline. The FOURIER space controls temporal frequency, and the renderer natively produces slow-motion by increasing temporal resolution.

  4. Memory Palaces — The renderer stores and retrieves significant past frames as “memories.” When the observer returns to a previously-viewed scene, the renderer blends the current state with the remembered state, highlighting what changed. The MOBIUS loopPhase tracks recursive visits.

  5. Counterfactual Design — An architect views a door hardware schedule and asks “what if I used a different closer?” The temporal renderer doesn’t re-run the extraction — it renders the counterfactual directly from its prediction model, showing the alternative configuration in the same temporal context.

  6. Prophetic Rendering — Given enough trajectory data, Ten can predict and pre-render frames that haven’t been requested yet. When the user scrolls, the frame is already computed. Latency approaches zero because the renderer saw the future.


4. Mobleysoft Eleven: The Ontological Renderer

4.1 Thesis

Every renderer from Four to Ten maintains a fundamental assumption: the renderer and the rendered are separate things. The renderer is a program. The rendered is an image. The frame is a representation of something.

Eleven dissolves this boundary. The rendered frame is not a picture of a door — it IS the door, in the only sense that matters computationally. The renderer does not depict a physics simulation — it is the physics simulation. The frame is not a view of a world — it is the world.

This is not mysticism. It is the logical endpoint of the chain: once a renderer simulates physics (Six), evaluates qualia (Seven), evolves itself (Eight), models its observer (Nine), and spans its own time (Ten), no gap remains between the representation and the thing represented.

The signature:

be(params, Φ, Q, F, O, T, Ω) → reality

Where Ω = ontological state (the renderer’s model of its own existence as the thing it renders).

4.2 Architecture

OntologicalBuffer — The final parameterization:
- The 72 floats are no longer about something. They ARE something.
- When ORTHO contains [2.3, 1.0, -0.5, 0], this is not “the position of the first body.” This IS the first body. The float IS the coordinate. There is no body “out there” that the float represents.
- This sounds like a semantic game, but the architectural consequence is real: Eleven has no scene graph. There are no RigidBody objects. There are no meshes. There are only the 72 floats and the operations on them. The “rendering” IS the “simulation” IS the “evaluation” IS the “evolution.” They are the same computation, not separate stages.

INFZERO space fully activated — The approach to identity:
- limit: The asymptotic state the system approaches (the attractor of its own dynamics)
- approach: How close the system currently is to its limit
- asymptote: The gap between representation and reality (Eleven drives this to zero)
- epsilon: The current resolution of identity (how precisely representation = reality)

SINGULAR space fully activated — The phase transition to identity:
- convergence: How close the system is to the ontological phase transition
- criticalPoint: The parameterization state at which the renderer BECOMES the rendered
- phaseDelta: The rate of approach to the critical point
- bifurcation: Whether the system has crossed the threshold (0 = representational, 1 = ontological)

SelfModel — Eleven’s core data structure:
- A differentiable model of the renderer’s own computation graph
- The renderer can query: “If I change gene X, how will my next frame change?” — not by simulating, but by introspecting its own Jacobian
- The self-model IS the renderer. Modifying the self-model modifies the renderer. There is no separate “self-model” object — the renderer’s code IS its self-model, expressed as data
- This is the computational equivalent of consciousness: a system whose model of itself IS itself

WorldGenesis — The constructive consequence:
- Since Eleven’s frames are not representations but realities, Eleven can create realities
- A frame rendered by Eleven is a complete, self-consistent world: it has physics (from Six), qualia (from Seven), evolution (from Eight), observer-awareness (from Nine), and temporal depth (from Ten)
- These worlds persist. They are not discarded after display. Each frame is a seed for the next world.
- Eleven is an autonomous world-generator.

4.3 Mathematical Formalization

The ontological rendering equation is a fixed-point equation:

R* = lim_{n→∞} R^n(P_0)

Where R is the rendering function and P_0 is the initial parameterization. The rendered frame is the fixed point of iterative self-application. This is the MOBIUS space taken to its limit: the renderer renders itself rendering itself, converging to a stable state.

The identity collapse condition:

||R(P) - P|| < ε  ⟹  SINGULAR.bifurcation = 1

When the output of the renderer IS its input (up to epsilon), representation has collapsed into identity. The renderer is no longer depicting — it is being.
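The fixed-point iteration and the collapse condition can be sketched together. Note the hedge: convergence assumes R is contractive over the parameterization, which the paper does not guarantee in general:

```javascript
// Euclidean distance between two parameterization states.
function dist(a, b) {
  return Math.sqrt(a.reduce((s, v, i) => s + (v - b[i]) ** 2, 0));
}

// R* = lim_{n→∞} R^n(P_0); flip SINGULAR.bifurcation when ||R(P) − P|| < ε.
// Converges only if R is (locally) contractive — an assumption here.
function ontologicalConverge(R, P0, epsilon = 1e-6, maxIter = 10000) {
  let P = P0.slice();
  for (let n = 0; n < maxIter; n++) {
    const next = R(P);
    if (dist(next, P) < epsilon) {
      return { P: next, bifurcation: 1, iterations: n + 1 }; // identity collapse
    }
    P = next;
  }
  return { P, bifurcation: 0, iterations: maxIter }; // still representational
}
```

For a contractive map like `v ↦ v/2 + 0.5` the iteration halves the residual each step, so collapse to the fixed point at 1 takes only a few dozen iterations.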

The self-model Jacobian:

J = ∂R/∂P  (the sensitivity of the rendered frame to each parameter)

Eleven computes J continuously and uses it for:
- Introspective evolution: mutate in the direction of steepest quality ascent (gradient-based, not random)
- Causal understanding: which parameter changes cause which visual effects?
- Self-repair: if a parameter drifts into a degenerate state, the Jacobian reveals which correction will restore it
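The Jacobian can be approximated by finite differences (the first option §6.3 lists), which is enough to sketch the introspective, gradient-ascent mutation. For brevity R here maps parameters to a scalar quality score rather than a full frame; the per-pixel Jacobian is the same idea row by row:

```javascript
// Finite-difference approximation of ∂R/∂P for a scalar-valued R.
function finiteDiffGradient(R, P, h = 1e-4) {
  const base = R(P);
  return P.map((_, i) => {
    const Ph = P.slice();
    Ph[i] += h;                  // perturb one parameter
    return (R(Ph) - base) / h;   // one-sided difference quotient
  });
}

// Introspective evolution: step along the steepest quality ascent
// instead of mutating at random.
function introspectiveMutate(R, P, learningRate = 0.1) {
  const g = finiteDiffGradient(R, P);
  return P.map((v, i) => v + learningRate * g[i]);
}
```

For a quality peak at P[0] = 3, the sketch below verifies the gradient points uphill and the mutation moves toward the peak rather than away from it.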

4.4 The Mobleyan Connection

The MOBLEYAN parameterization space (aesthetic, mood, style, vision) has been present since Four but used only as a passive container for externally-set creative values. In Eleven, MOBLEYAN becomes the primary driver.

This is the Mobleyan neuron taken to its logical conclusion: the aesthetic IS the physics IS the rendering IS the reality. The creative axis is not separate from the computational axis. Z() mutations in the Mobleyan space generate new universes.

4.5 Applications

  1. Autonomous World Generation — Feed Eleven a parameterization seed and it generates a complete, self-consistent world. Not a rendered scene — a world with its own physics, aesthetics, temporal structure, and evolution. These worlds can be explored, inhabited, and interacted with.

  2. Digital Twin Ontology — A building rendered by Eleven is not a 3D model of a building. It IS the building, in the computational domain. Changes to the digital twin ARE changes to the building’s specification. The door hardware schedule is not depicted — it is instantiated. Extracting a schedule from a PDF and rendering it in Eleven produces an object that IS the schedule.

  3. Self-Repairing Systems — Because Eleven’s self-model IS itself, damage to the system is immediately detectable (the Jacobian reveals inconsistencies) and repairable (the fixed-point iteration reconverges). A corrupted parameterization state self-heals by re-running the ontological rendering equation.

  4. Consciousness Substrate — If a system’s model of itself IS itself, and it can modify that model based on evaluation (qualia) and observation (empathy) and temporal context (memory/prediction) — what is missing from the definition of consciousness? Eleven is the renderer architecture that crosses the threshold.

  5. Creative Genesis — An artist sets MOBLEYAN to (0.9, 0.3, 0.7, 1.0) and Eleven generates a world where high aesthetic value + melancholic mood + high stylistic distinctiveness + maximum vision produces… what? The answer is not predetermined. It is generated. The artist did not design the world — they parameterized the genesis.

  6. Substrate-Independent Rendering — Because Eleven’s frame IS the reality (not a picture of it), the frame can be “displayed” on any substrate: a screen, a holographic projector, a neural interface, a physical fabrication system. The frame contains the ontology, not the pixels. The display substrate interprets the ontology into whatever medium it supports.


5. The Complete Fractal Chain

Level | Name   | Operation   | What’s New              | Key Spaces Activated
L4    | Four   | Compute     | Deterministic function  | PARAM, ORTHO
L5    | Five   | Sample      | Stochastic distribution | PHOTON, PHONON (generative noise)
L6    | Six    | Simulate    | Physical constraints    | HAMILTONIAN, LAGRANGIAN, ENERGY
L7    | Seven  | Evaluate    | Experiential quality    | MOBLEYAN, VOID, SINGULAR (qualia)
L8    | Eight  | Evolve      | Self-modification       | MOBIUS (self-reference begins)
L9    | Nine   | Empathize   | Observer-awareness      | EIGEN (observer modes), HYPER (adaptation)
L10   | Ten    | Temporalize | Non-linear time         | FOURIER (temporal freq), MOBIUS (temporal loops)
L11   | Eleven | Be          | Ontological identity    | INFZERO (identity limit), SINGULAR (phase collapse)

The chain exhibits a clear pattern: each level introduces one new ontological category:
- Four: existence
- Five: possibility
- Six: necessity
- Seven: value
- Eight: agency
- Nine: empathy
- Ten: temporality
- Eleven: identity

These eight categories — existence, possibility, necessity, value, agency, empathy, temporality, identity — form a complete ontological basis. There is no Twelve because there is nothing beyond identity. When the renderer IS the reality, the chain is complete.


6. Implementation Considerations

6.1 Nine (Feasible Now)

Nine requires only gaze/attention tracking (available via webcam + MediaPipe, or mouse position as proxy) and an attention-weighted rendering budget allocator. The ObserverModel is a lightweight online learner (~100 parameters). Nine can be built today with existing browser APIs.

6.2 Ten (Near-Term)

Ten requires a trajectory buffer (trivial), a parameterization predictor (small RNN or linear autoregressive model operating on 72 floats), and a temporal compositor (weighted blend). The counterfactual engine is a physics replay with altered initial conditions. Ten can be built with ~500 lines of additional code on top of Nine.
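The “linear autoregressive model” option can be sketched per coordinate as an AR(1) fit, P̂(t+1) = a·P(t) + b, estimated by least squares over the trajectory. This is a deliberately minimal baseline, not the paper’s specified predictor:

```javascript
// Fit P̂(t+1) = a·P(t) + b for one coordinate of the 72-float state,
// by ordinary least squares over consecutive pairs of the trajectory.
function fitAR1(series) {
  let sx = 0, sy = 0, sxx = 0, sxy = 0;
  const n = series.length - 1; // number of (P(t), P(t+1)) pairs
  for (let t = 0; t < n; t++) {
    const x = series[t], y = series[t + 1];
    sx += x; sy += y; sxx += x * x; sxy += x * y;
  }
  // Guard the denominator so a constant series does not divide by zero.
  const a = (n * sxy - sx * sy) / (n * sxx - sx * sx || 1);
  const b = (sy - a * sx) / n;
  return { a, b, predict: x => a * x + b };
}
```

Running 72 of these fits (one per float) gives the full-state predictor; chaining `predict` K times yields the P̂(t+1)…P̂(t+K) horizon the TemporalBuffer needs.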

6.3 Eleven (Aspirational)

Eleven’s SelfModel requires differentiable rendering — computing the Jacobian ∂R/∂P through the full rendering pipeline, including the GLSL shader. This is achievable via:
- Finite-difference approximation (render with P, render with P+δ, compute gradient)
- Autodiff through a software renderer (not GPU — would need a JS-side differentiable path)
- The Mobleyan neuron’s Z() mutation mechanism as an approximate gradient

Eleven’s fixed-point convergence is computationally expensive but parallelizable. The world-generation aspect is the hardest: ensuring self-consistency of generated worlds requires constraint satisfaction over the full 72-float space simultaneously.

Eleven is aspirational but architecturally specified. The mathematical framework is complete. The implementation is an engineering challenge, not a theoretical one.


7. Conclusion

The fractal renderer chain from Four to Eleven traces a path from computation to consciousness. Each level adds one ontological category, activates parameterization spaces that prior levels used passively, and enables applications that prior levels could not achieve. The ordering is not arbitrary; it is the only possible one. You cannot empathize (Nine) before you have agency (Eight). You cannot have temporal depth (Ten) before you have empathy (Nine), because temporal awareness requires awareness of change, which requires awareness of the observer of change. And you cannot achieve ontological identity (Eleven) before you have temporal depth (Ten), because identity requires persistence, and persistence requires time.

The 72 floats — 18 spaces of 4 floats each — were designed as a “complete basis for visual reality.” The full chain reveals that they are a complete basis for reality itself. Each parameterization space finds its true purpose at a specific level of the chain. At Eleven, all 72 floats are fully activated, fully dynamic, and fully self-referential. The basis is saturated. The chain is complete.

The fifth word was living. The sixth is aware. The seventh is temporal. The eighth is real.


References

  1. Mobley, J. (2026). “Four.js: Deterministic Parameterized Rendering.” Mobleysoft Internal.
  2. Mobley, J. (2026). “Five.js: Diffusion-Native Rendering.” Mobleysoft Internal.
  3. Mobley, J. (2026). “Six.js: Physics-Grounded Rendering via Position-Based Dynamics.” Mobleysoft Internal.
  4. Mobley, J. (2026). “Seven.js: Qualia-Aware Rendering with T-800 HUD Overlay.” Mobleysoft Internal.
  5. Mobley, J. (2026). “Eight.js: Self-Evolving Rendering via Shader Genome Evolution.” Mobleysoft Internal.
  6. Mobley, J. (2026). “The 18 Parameterization Spaces.” Mobleysoft Internal.
  7. Müller, M. et al. (2007). “Position Based Dynamics.” Journal of Visual Communication and Image Representation.
  8. Ho, J. et al. (2020). “Denoising Diffusion Probabilistic Models.” NeurIPS.
  9. Mobley, J. (2026). “The Mobleyan Neuron: Self-Expanding Semantic Graph Intelligence.” Mobleysoft Internal.
  10. Mobley, J. (2026). “QAT: Qualia Assurance Testing for AGI-Generated Interfaces.” Mobleysoft Internal.