Starting this thing felt like the kind of decision you can talk yourself out of indefinitely. You either begin, or you keep preparing to begin… So here it is: issue one of Notes from Shahaan.
No particular format yet. Just things I've been reading, jotting down, watching, and arguing with in my head across AI, tech, venture, and ideas that deserve more careful attention than they're currently getting. I've been power-using Obsidian for about a year now, so it's time I put the knowledge system I've built to work.
If something here is useful, pass it on.
The fly that walked without being born
Something truly strange occurred last week, and I believe it warrants more attention than it has gotten.
A company called Eon Systems released a video of a fruit fly walking, grooming, and feeding in a physics-simulated environment. That sounds unremarkable until you understand what's underneath it. This was not an animation. It was not a reinforcement learning policy trained to approximate fly-like movement. It was an emulation of a real fly's brain, mapped neuron by neuron from electron microscopy data, placed into a simulated body, and left to generate behavior entirely from its own circuit dynamics.
The brain had roughly 140,000 neurons and 50 million synaptic connections, all derived from the FlyWire connectome… the largest complete brain wiring diagram ever assembled. Sensory input flowed in, neural activity propagated through the full connectome, and motor commands flowed out. A simulated body executed the result. Nobody programmed the walking. Nobody trained the grooming behavior. It fell out of the brain's structure itself.
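To make "closing the loop from perception to action" concrete, here's a deliberately toy sketch, entirely my own illustration and nothing like Eon's actual system: a fixed, sparse synaptic weight matrix (never trained), sensory drive injected on one side, motor activity read off the other, with behavior being whatever the fixed circuit produces.

```python
import random

random.seed(0)

N = 50                     # toy network; the real fly connectome has ~140,000 neurons
SENSORY = range(0, 10)     # neurons receiving external input
MOTOR = range(40, 50)      # neurons whose activity we read out as "motor commands"

# Fixed, sparse "connectome": weights are given by structure, never trained.
weights = {(i, j): random.gauss(0, 0.3)
           for i in range(N) for j in range(N)
           if i != j and random.random() < 0.1}

rate = [0.0] * N  # firing rate of each neuron

def step(stimulus):
    """One perception->action cycle: inject input, propagate, read motors."""
    incoming = [0.0] * N
    for (i, j), w in weights.items():
        incoming[j] += w * rate[i]
    for s in SENSORY:
        incoming[s] += stimulus
    for n in range(N):
        # leaky update with a simple rectification nonlinearity
        rate[n] = max(0.0, 0.8 * rate[n] + 0.2 * incoming[n])
    return [rate[m] for m in MOTOR]

# The "behavior" is whatever the fixed circuit does under sensory drive.
for t in range(20):
    motor = step(stimulus=1.0)
```

The point of the sketch is the architecture, not the dynamics: no loss function, no reward, no training loop. Whatever pattern shows up on the motor neurons is a consequence of the wiring.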
That is a genuine scientific milestone. The first time a complete biological connectome has been embodied in simulation and produced multiple distinct, naturalistic behaviors driven by its own dynamics, closing the full loop from perception to action in a whole-brain emulation. Prior work had either modeled brains without bodies or animated bodies without biological brains. Eon did something different, and the distinction matters.
The internet's response was roughly what you'd expect: genuine wonder, which was warranted, and spectacular overreach, which was not.
The leap that deserves scrutiny
Within hours of the video circulating, the discourse had already arrived at conclusions like "your consciousness is software and someone just proved it can be copy-pasted." The posts accumulated hundreds of thousands of impressions. The underlying logic, stated plainly: the fly brain was emulated; behavior emerged; the human brain is just more neurons; we've gotten good at scaling; therefore, digital immortality is within reach; and the first digital human will be copied from someone already alive.
That chain of reasoning is interesting right up until it falls apart. It falls apart in several places.
Behavior is not consciousness. The emulated fly walked, groomed, and fed. What it did not do, and what no one has demonstrated or knows how to demonstrate, is experience any of it. The hard problem of consciousness is not a branding issue or an engineering bottleneck waiting for more compute. It is a genuine and unresolved philosophical problem about the nature of subjective experience, and the fact that a system produces naturalistic behavior tells you nothing about whether there is something it is like to be that system. Where the line sits between mechanism and experience, or whether such a line exists the way we intuitively imagine it does, remains genuinely open.
Consciousness is not a file. The framing of "copy-paste your consciousness" assumes that what makes someone who they are is a static, capturable structure. But consciousness, to whatever extent it can be studied and described, is a continuous process, shaped by lived experience, by the particular history of a particular brain, by neurochemical dynamics happening in real time, and by decades of development that have wired and rewired connections in ways that reflect an entire life rather than a single structural snapshot. What a connectome captures, even a maximally precise one, is the architecture at a point in time. Whether that architecture, separated from its biological substrate and its history, preserves whatever we mean by a person's identity is a question that gets harder under examination, not easier.
The scale gap is being glossed over. The fruit fly brain has roughly 140,000 neurons. The human brain has approximately 86 billion. That is nearly six orders of magnitude, and it is not a rounding difference. It introduces qualitatively new problems at every stage of the work. Neurons alone don't tell the full story either — the brain's complexity also involves glial cells, neurochemical systems operating across many timescales, and a developmental history spanning decades that shapes connectivity in ways no static connectome can fully represent. "We've gotten good at scaling" is true in certain well-defined engineering contexts. It is not a general-purpose argument that can be cashed in against every hard problem in biology and philosophy at once.
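The back-of-envelope arithmetic is worth writing out, using the commonly cited counts (the human synapse figure is an order-of-magnitude estimate, not a measured number):

```python
import math

fly_neurons = 1.4e5      # FlyWire connectome, ~140,000 neurons
fly_synapses = 5e7       # ~50 million mapped synaptic connections
human_neurons = 8.6e10   # ~86 billion neurons
human_synapses = 1e14    # commonly cited estimate (~100 trillion)

neuron_gap = human_neurons / fly_neurons
synapse_gap = human_synapses / fly_synapses

print(f"neuron gap:  {neuron_gap:,.0f}x (~10^{math.log10(neuron_gap):.1f})")
print(f"synapse gap: {synapse_gap:,.0f}x (~10^{math.log10(synapse_gap):.1f})")
```

The neuron gap alone is a factor of roughly 600,000, and the synapse gap is larger still. That is the distance being waved away whenever "we've gotten good at scaling" does the argumentative work.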
None of this diminishes what Eon actually accomplished. The precision of the science deserves a more precise response than it's getting.
What is actually worth taking seriously
For one, the work establishes whole-brain emulation as a credible scientific track. The dominant AI paradigm of the last decade is essentially behavioral: observe outputs at scale, optimize toward them, and let capability emerge from the optimization. Eon started from the opposite direction… modeling what an organism is at the level of its complete neural architecture, then asking whether behavior falls out from the structure itself. The fact that it emerged without any training signal pointing toward it is a meaningful data point, and it suggests that whole-brain emulation deserves treatment as a serious scientific program rather than a philosophical thought experiment.
The work also points toward a category of companies most investors aren't structuring around yet. Eon's approach sits at the intersection of connectomics, high-fidelity simulation infrastructure, and embodied neuroscience. It doesn't fit biotech as traditionally practiced, which is primarily oriented toward pharmaceutical and therapeutic applications. It's not AI in the sense most venture portfolios currently organize around, and it isn't robotics in any conventional sense. The companies that build serious infrastructure here will be doing work with implications that stretch from neurological disease modeling and drug discovery… where a high-fidelity brain simulation opens research approaches that don't currently exist… to the harder, longer-horizon questions about the nature of mind, which are currently being answered poorly by people with large followings.
And underneath all of it sits a sharper question about what intelligence actually is. Large language models are trained on the behavioral outputs of intelligence rather than on intelligence itself. They've learned from what minds produce without being grounded in how minds work. Whole-brain emulation is asking something categorically different: whether, if you reconstruct the structure with sufficient fidelity, the behavior… and perhaps eventually something more… will follow from the structure itself. Whether that approach converges with or diverges from the LLM paradigm over time, whether they're pointing at the same thing from different angles or toward genuinely different destinations, is one of the more interesting open questions in the field right now. I don't think enough serious people are engaging with it.
What this actually is
A small company demonstrated a proof of concept that closes a long-standing gap in neuroscience. For the first time, a complete biological brain, derived from a real organism through connectomic mapping, was embodied in simulation and produced naturalistic behavior from its own circuit dynamics, without training and without anyone programming the outputs. That is a new thing in the world, and it opens a research agenda that was previously more speculative than tractable.
What it is not, at least not yet, is a demonstration that consciousness is software, that identity can be preserved through structural copying, or that the distance between a fly and a human is merely an engineering problem to be solved at the next GPU cluster. Those claims require considerably more than the work has shown, and treating them as settled forecloses the very questions that make this development worth careful thought.
The version of this story worth telling takes the science seriously on its own terms, acknowledges what it actually proves, and sits honestly with what it doesn't. That version is more interesting than the one circulating right now, and it will age considerably better.
Watching
A few things I’ve been tracking this past week:
Origin Pilot
Last month, China quietly open-sourced the world's first quantum operating system. The company is Origin Quantum Computing Technology, based in Hefei; the OS is called Origin Pilot. It runs on superconducting qubits, trapped ions, and neutral atoms… the three major competing quantum platforms, all in one system. Most quantum frameworks from Western companies (IBM's Qiskit, Google's Cirq) live in the cloud. Origin Pilot is locally deployable and free to download. The hardware behind it has already run over 339,000 jobs for users in 120 countries. This is the DeepSeek playbook applied to quantum infrastructure: release free, build the ecosystem, set the architectural defaults before anyone else does. Whether China's underlying hardware is ahead of the West's is almost beside the point. Infrastructure wins by becoming infrastructure.
The thinking gap
A pattern I keep noticing: AI is making certain people significantly more capable, and making others more dependent on outputs they can't evaluate. A recent Fortune survey found that the biggest concern among Fortune 500 executives isn't AI, but a critical-thinking gap. When you ask a tool for an answer, and you have no framework for judging whether it's right, the tool isn't helping you think. It's thinking for you. The people who get the most from AI already know how to structure problems, apply decision frameworks, think in systems, and interrogate assumptions. Design thinking, theory of change, first-principles reasoning… these aren't soft skills anymore. They're what determines whether you're directing the model or being gradually displaced by it.
Gemini Embedding 2
Google released its first natively multimodal embedding model this week. Text, images, video, audio, and documents are all mapped into the same embedding space, so you can do retrieval and similarity search across modalities without translating between them. Beyond the feature list, what I find interesting is what it signals: the abstraction layer for how AI "understands" media is consolidating, and whoever controls that layer controls a lot of what gets built on top of it. It went into public preview two days ago. This one will be quietly foundational.
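A shared embedding space reduces cross-modal retrieval to plain vector similarity. A minimal sketch of the idea, with made-up four-dimensional vectors standing in for real model outputs (actual embedding models return vectors with hundreds or thousands of dimensions):

```python
import math

def cosine(a, b):
    """Cosine similarity: the standard ranking metric for embedding search."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

# Hypothetical embeddings. In a shared space, every item gets a vector from
# the same model regardless of whether it was text, an image, audio, or video,
# so they can be compared directly with no translation step.
corpus = {
    "video: dog catching frisbee": [0.9, 0.1, 0.2, 0.0],
    "image: cat on a couch":       [0.1, 0.8, 0.3, 0.1],
    "audio: barking":              [0.7, 0.1, 0.5, 0.2],
}
query = [0.85, 0.15, 0.15, 0.05]  # e.g. the embedded text query "a dog playing"

ranked = sorted(corpus, key=lambda k: cosine(query, corpus[k]), reverse=True)
```

Here a text query ranks a video and an audio clip above an unrelated image without any modality-specific logic. That collapse of per-modality pipelines into one similarity operation is exactly why controlling the embedding layer matters.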
A thought on regret
I've been sitting with a passage from Kierkegaard's Either/Or:
“Marry, and you will regret it; don’t marry, you will also regret it; marry or don’t marry, you will regret it either way. Laugh at the world’s foolishness, you will regret it; weep over it, you will regret that too; laugh at the world’s foolishness or weep over it, you will regret both (…) This, gentlemen, is the essence of all philosophy.”
It's been circulating lately. I think it's an interesting observation, but it's also slightly wrong… or incomplete, at least.
The passage works as a description of anxiety. It's less convincing as a philosophy. Regret requires a self that stands outside its choices and judges them against some imagined alternative. I'm not sure that the self is as stable or reliable as the framing assumes. And I don't believe in regret as a governing principle.
The existentialist correction is simpler: you are your choices. There is no neutral position, no version of you that exists apart from what you've decided. The question isn't "how do I avoid regretting this?" It's "what kind of person do I want to have been?" Choose deliberately. Own what you chose. The regret problem mostly dissolves once you stop pretending there was a better option waiting if only you'd picked differently.
Kierkegaard was describing the trap. Existentialism is the way out.
That’s all for issue one, folks!
Tune into your inbox next Friday for more Notes from Shahaan!
