Current whole brain emulation (WBE) research overwhelmingly pursues a post-mortem scan-and-reconstruct paradigm: preserve the brain after death, scan it at nanometer resolution, and computationally reconstruct its function from static structural data. This paper argues that this paradigm suffers from fundamental limitations that cannot be overcome by incremental improvements in scanning resolution or computational power alone. We propose an alternative framework—Longitudinal Neurodynamic Capture (LNC)—which builds a functional brain model progressively over the subject's lifetime by continuously recording multimodal biological, behavioral, and contextual data streams. We ground this framework in recent advances in neural dust sensor technology, connectomic mapping, digital twin brain simulation, and multimodal sensor fusion. We further propose three novel conjectures: (1) that distributed nanosensor swarms ("hive architectures") can provide the continuous in-vivo recording resolution required; (2) that faithful reproduction does not require temporal completeness because human cognitive states are inherently non-stationary; and (3) that all internal mental states have observable physical correlates accessible through multimodal, AI-driven sensing. We identify the open engineering challenges for each conjecture and propose a research agenda to address them.
Keywords: whole brain emulation, mind uploading, longitudinal capture, neural dust, connectomics, digital twin, nanosensors, consciousness preservation, neurodynamic modeling
The prospect of whole brain emulation—creating a computational system that faithfully reproduces the functional behavior of a specific human brain—has been discussed in the scientific and philosophical literature for decades (Sandberg & Bostrom, 2008; Hayworth, 2010). The field received a comprehensive reassessment in the State of Brain Emulation Report 2025, a collaborative effort by over 45 expert contributors from institutions including MIT, UC Berkeley, the Allen Institute, Harvard, and Google (Zhu et al., 2025). That report identifies three essential capabilities: neural dynamics recording, connectomics mapping, and computational modeling.
Despite significant progress in each area, the dominant paradigm remains fundamentally retrospective: it assumes the brain will be preserved (via cryopreservation, chemical fixation, or plastination) and subsequently scanned to extract its structure. The functional dynamics—the electrochemical, neuromodulatory, and temporal patterns that constitute the living mind—are inferred from this static structure.
This paper challenges that assumption and proposes an alternative: rather than capturing the brain at a single moment, capture the person living their life over decades, building the emulation model from a continuously accumulating multimodal dataset. We term this approach Longitudinal Neurodynamic Capture (LNC).
We argue that LNC is not merely a supplementary data source for traditional WBE but a potentially superior primary strategy, for reasons grounded in neuroscience, information theory, and engineering feasibility.
The moment blood flow ceases, the brain's electrochemical state begins to degrade. Membrane potentials collapse within seconds. Neurotransmitter concentrations at synaptic clefts are disrupted within minutes. Ion channel conformational states—which determine a neuron's current excitability and are critical to its functional behavior—are lost irreversibly (Zafar & Schober, 2021).
Aldehyde-stabilized cryopreservation (ASC), developed by McIntyre and Fahy (2015) and recognized by the Brain Preservation Foundation, preserves ultrastructural detail including synapses and cell membranes at electron-microscopic resolution. However, ASC captures structural information—it does not preserve the dynamic electrochemical state. The technique was designed for morphological preservation, not functional state capture (Brain Preservation Foundation, 2024; McKenzie et al., 2024).
A structural scan, however detailed, provides a single frozen frame. Attempting to reconstruct brain dynamics from this is analogous to inferring the rules of chess from a photograph of a board mid-game—possible in principle with sufficient auxiliary knowledge, but profoundly underdetermined.
The MICrONS project (Allen Institute, 2025) successfully mapped a cubic millimeter of mouse visual cortex at synaptic resolution: 200,000 cells, 523 million synapses, 4 kilometers of axons, producing 1.6 petabytes of data. This landmark achievement, published across ten Nature papers, represents seven years of work by over 150 scientists. Yet even this extraordinary dataset captures structure only—not the pattern of activity that flowed through those synapses during the animal's life (Allen Institute, 2025; Tavakoli et al., 2025).
The human brain contains approximately 86 billion neurons connected by an estimated 100-150 trillion synapses (Azevedo et al., 2009; Herculano-Houzel, 2009). Scanning the full human brain at the resolution achieved by MICrONS would produce on the order of exabytes of raw data and require computational resources that exceed current capabilities by several orders of magnitude (Zhu et al., 2025).
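The scale argument can be made concrete with a back-of-envelope extrapolation from the MICrONS figures. The constants below are rough assumptions for illustration only: the brain volume is a typical adult value, and the assumed reduction ratio from raw electron-microscopy imagery to a segmented connectome is a placeholder, not a measured quantity.

```python
# Back-of-envelope scaling from MICrONS to a whole human brain.
# All constants are rough assumptions for illustration, not measurements.

MICRONS_VOLUME_MM3 = 1.0        # MICrONS imaged ~1 cubic millimeter
MICRONS_RAW_PETABYTES = 1.6     # raw data produced (Allen Institute, 2025)

HUMAN_BRAIN_VOLUME_MM3 = 1.2e6  # ~1,200 cm^3, typical adult brain (assumption)
RAW_TO_CONNECTOME_RATIO = 1e3   # assumed ~1000x shrink from raw imagery to a
                                # segmented connectome (assumption)

# Linear scaling of raw imagery: petabytes per mm^3 times whole-brain volume.
raw_pb = MICRONS_RAW_PETABYTES * HUMAN_BRAIN_VOLUME_MM3 / MICRONS_VOLUME_MM3
print(f"Raw imagery, linear scaling: {raw_pb:.2e} PB (~{raw_pb / 1e6:.1f} ZB)")

# Even the heavily reduced segmented representation stays at exabyte scale.
connectome_eb = raw_pb / RAW_TO_CONNECTOME_RATIO / 1e3  # PB -> EB
print(f"Segmented connectome: ~{connectome_eb:.1f} EB")
```

Under these assumptions, raw imagery alone reaches zettabyte scale, and even an aggressively reduced segmented representation remains in the exabyte range cited above.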
LNC inverts the WBE strategy. Instead of capturing the brain's state at one moment and inferring its dynamics, LNC captures the dynamics directly over time and builds a model that reproduces them.
The foundational insight is that the brain is not a static object but a process—a trajectory through a high-dimensional state space. A person is not defined by their neural configuration at any instant but by the patterns of transformation that configuration undergoes over time. LNC captures these patterns of transformation directly.
This approach is grounded in dynamical systems theory. A dynamical system can be characterized either by its state at a point in time plus its governing equations, or by a sufficiently long trajectory through its state space from which the governing equations can be inferred (Takens, 1981; Sauer et al., 1991). The Takens embedding theorem demonstrates that the dynamics of a system can be reconstructed from time-series observations of even a subset of its variables, using delay coordinates, provided enough delay dimensions are used and the observation window and sampling rate are adequate.
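The delay-coordinate construction behind the Takens result can be sketched in a few lines. The Lorenz system stands in here for neural dynamics purely as a familiar example; the Euler integrator, delay `tau`, and embedding dimension are illustrative choices, not claims about brain data.

```python
# Delay-coordinate embedding (Takens, 1981): reconstructing a system's
# state space from a time series of a single observed variable.
import numpy as np

def lorenz_trajectory(n_steps=20000, dt=0.005, sigma=10.0, rho=28.0, beta=8/3):
    """Integrate the Lorenz equations with a simple Euler scheme."""
    xyz = np.empty((n_steps, 3))
    xyz[0] = (1.0, 1.0, 1.0)
    for i in range(1, n_steps):
        x, y, z = xyz[i - 1]
        xyz[i] = xyz[i - 1] + dt * np.array(
            [sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    return xyz

def delay_embed(series, dim=3, tau=10):
    """Embed a scalar series into dim-dimensional delay coordinates:
    each row is (s[t], s[t + tau], ..., s[t + (dim-1)*tau])."""
    n = len(series) - (dim - 1) * tau
    return np.column_stack([series[i * tau: i * tau + n] for i in range(dim)])

traj = lorenz_trajectory()
observed = traj[:, 0]              # we only get to see x(t), not y or z
embedded = delay_embed(observed)   # reconstructed 3-D state space
print(embedded.shape)              # (19980, 3)
```

The point of the sketch is the asymmetry LNC exploits: `embedded` recovers a faithful image of the full attractor geometry even though only one of the three state variables was ever observed, provided the observation window is long.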
LNC proposes the integration of multiple continuous data streams:
Neurophysiological streams: Continuous brain activity recording (currently EEG/fNIRS; future: implanted nanosensors); neurotransmitter and neuromodulator profiling; sleep architecture and memory consolidation patterns; hormonal and neuroendocrine dynamics.
Behavioral/Cognitive streams: Decision-making patterns with full contextual annotation; emotional responses indexed by physiological correlates (HRV, GSR, pupil dilation); creative output with process documentation; conversational and linguistic patterns across extended timescales; problem-solving strategies and reasoning chains.
Environmental/Contextual streams: Sensory environment (visual, auditory, olfactory); social interaction dynamics; life events and their temporal relationship to behavioral changes.
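Integrating these streams requires a common time-aligned representation. The sketch below is a minimal hypothetical record schema for doing so; the stream names, fields, and `Timeline` API are illustrative inventions, not a proposed standard.

```python
# A hypothetical record schema for time-aligning LNC data streams.
# Stream names and payload fields are illustrative only.
from dataclasses import dataclass, field

@dataclass
class StreamSample:
    stream: str    # e.g. "eeg", "hrv", "audio_env", "life_event"
    t: float       # seconds since capture epoch
    payload: dict  # modality-specific measurements

@dataclass
class Timeline:
    samples: list = field(default_factory=list)

    def add(self, sample: StreamSample) -> None:
        self.samples.append(sample)

    def window(self, t0: float, t1: float) -> list:
        """All samples in [t0, t1), across modalities, in time order."""
        hits = [s for s in self.samples if t0 <= s.t < t1]
        return sorted(hits, key=lambda s: s.t)

tl = Timeline()
tl.add(StreamSample("eeg", 12.000, {"alpha_power": 0.42}))
tl.add(StreamSample("hrv", 12.010, {"rmssd_ms": 55.0}))
tl.add(StreamSample("life_event", 11.5, {"label": "conversation_start"}))
print([s.stream for s in tl.window(11.0, 13.0)])
# -> ['life_event', 'eeg', 'hrv']
```

The design choice worth noting is that context ("life_event") sits in the same timeline as physiology, so causal annotation reduces to windowed queries rather than post-hoc joins across separate archives.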
| Dimension | Post-Mortem Scan | Longitudinal Capture |
|---|---|---|
| Temporal information | Single timepoint | Decades of dynamics |
| Causal relationships | Must be inferred | Directly observed |
| Electrochemical state | Lost at death | Captured in-vivo |
| Learning/adaptation rules | Invisible | Observable across time |
| Redundancy | Single measurement | Thousands of observations |
Conjecture 1: The resolution gap can be bridged by distributed networks of minimally invasive nanosensors, a "hive architecture," deployed throughout the brain and communicating collectively to relay high-resolution neural data to external receivers. This conjecture builds on UC Berkeley's neural dust platform (Seo et al., 2016) and on multi-access communication protocols for dense sensor networks (Ghanbari, 2024).
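One way such a swarm could share a single backscatter channel is time-division multiple access (TDMA), sketched below as a toy. The slot assignment, sensor count, and frame rate are illustrative assumptions; real protocols for this setting would need to handle sensor joins, failures, and interference adaptively.

```python
# A toy TDMA schedule for a nanosensor "hive": each sensor transmits in its
# own slot so the swarm shares one channel without collisions.
N_SENSORS = 8
SLOTS_PER_FRAME = 8   # one slot per sensor (assumption)
FRAME_HZ = 1000       # frames per second, i.e. per-sensor sample rate

def slot_for(sensor_id: int, frame: int) -> int:
    # Fixed round-robin assignment; adaptive scheduling is left out.
    return sensor_id % SLOTS_PER_FRAME

def collect_frame(frame: int, read_sensor) -> dict:
    """Gather one reading from every sensor in its assigned slot."""
    readings = {}
    for slot in range(SLOTS_PER_FRAME):
        for sid in range(N_SENSORS):
            if slot_for(sid, frame) == slot:
                readings[sid] = read_sensor(sid, frame)
    return readings

frame0 = collect_frame(0, lambda sid, f: 0.1 * sid)  # dummy voltages
print(len(frame0), "sensors reported;", FRAME_HZ, "Hz per sensor")
```

The sketch makes the engineering trade-off visible: per-sensor bandwidth falls linearly as the hive grows, which is precisely why the multi-access protocol design cited above is an open challenge rather than a solved detail.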
Conjecture 2: A human cognitive system is inherently non-stationary: it never remains in any fixed state. Faithful reproduction therefore requires fidelity to the subject's state at each given moment, not coverage of all possible states. The model should capture the person as they are, without conjecturing about future states or extrapolating to untested conditions.
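The difference between fidelity-to-the-moment and coverage-of-all-states can be illustrated numerically. In the toy below, an estimator with exponential forgetting tracks a slowly drifting (non-stationary) signal, while a global average smears all epochs together; the forgetting factor and drift rate are arbitrary illustrative parameters.

```python
# Tracking a non-stationary signal: exponential forgetting vs. global average.
import random

random.seed(0)
LAM = 0.95  # forgetting factor (assumption): ~20-sample effective memory

true_state = 0.0
forgetful, total, n = 0.0, 0.0, 0
for step in range(5000):
    true_state += 0.01  # slow drift: the "person" keeps changing
    obs = true_state + random.gauss(0, 0.5)  # noisy observation
    forgetful = LAM * forgetful + (1 - LAM) * obs  # weights recent data
    total += obs
    n += 1

global_avg = total / n  # weights all epochs equally
print(f"final state {true_state:.1f}, "
      f"forgetful estimate {forgetful:.1f}, global average {global_avg:.1f}")
```

The forgetting estimator stays near the current state while the global average lands near the midpoint of the whole history, which is faithful to no epoch at all. This is the sense in which the conjecture trades temporal completeness for momentary fidelity.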
Conjecture 3: The physical correlates of all mental states are technologically accessible with sufficiently rich multimodal instrumentation. This extends the physicalist premise into an engineering claim: the current inability to detect all internal states reflects instrumentation limitations, not fundamental unobservability.
Phase 1 (Years 1-3): Multimodal data collection protocol, baseline datasets, initial ML models, validation criteria.
Phase 2 (Years 3-7): Nanosensor miniaturization below 100 μm, multi-sensor mesh communication, in-vivo neurotransmitter sensing.
Phase 3 (Years 5-10): Personalized cognitive models, validation against held-out data, transfer learning.
Phase 4 (Years 8-15): Hive architecture in animal models, end-to-end pipeline, safety standards, human pilot studies.
Informed consent, data sovereignty, identity and rights, equity of access, and privacy require interdisciplinary engagement from ethicists, legal scholars, neuroscientists, and the public.
The dominant strategy for whole brain emulation—post-mortem structural scanning followed by computational reconstruction—faces fundamental limitations. We propose Longitudinal Neurodynamic Capture as an alternative paradigm that builds emulation models from multimodal data collected over the subject's lifetime. The technologies required are not yet mature, but their foundations exist: neural dust, connectomic mapping, digital twin brain simulation, and multimodal sensor fusion are all advancing rapidly.
The brain is not a photograph to be taken. It is a symphony to be recorded.
LNC currently captures less structural detail than post-mortem scanning at synaptic resolution; the nanosensor hive remains aspirational. The framework is prospective only—it cannot help those already deceased. Subject compliance introduces selection bias and data gaps. Data engineering at petabyte-to-exabyte scale is unsolved. Validation cannot be fully achieved until emulation technology matures.
Conflict of Interest: The author declares no competing interests.
Funding: This research received no external funding.
Data Availability: No datasets were generated or analyzed. This is a theoretical framework paper.
Full citations: Allen Institute (2025), Azevedo et al. (2009), Brain Preservation Foundation (2024), Buzsáki et al. (2012), Churchland (1986), Codina et al. (2025), Dorkenwald et al. (2024), Eriksson et al. (1998), Ghanbari (2024), Hayworth (2010), Hebb (1949), Herculano-Houzel (2009), Kantz & Schreiber (2004), Logothetis (2008), Lu et al. (2024), McIntyre & Fahy (2015), McKenzie et al. (2024), Robinson et al. (2008), Sandberg & Bostrom (2008), Sauer et al. (1991), Seo et al. (2016), Takahashi et al. (2025), Takens (1981), Tavakoli et al. (2025), Zafar & Schober (2021), Zhao et al. (2024), Zhu et al. (2025).
This paper is published under the pseudonym Karakira to protect the author's identity during the initial peer review and public discourse phase. The author is a Western researcher in their late fifties, currently working independently abroad, with over three decades of professional experience spanning software engineering, distributed systems architecture, applied artificial intelligence, and computational modeling.
Correspondence: contact@karakira.com