The 500-Million-Year-Old Refutation of Artificial Intelligence

One brain structure has persisted for 500 million years across fish, amphibians, reptiles, birds, and mammals: the basal ganglia. Conservation on that scale should tell us something fundamental about intelligence itself. Yet viewed through the computational lens, the structure makes no sense at all.
The problem isn't the basal ganglia. The problem is that computation smuggles in an assumption so deeply embedded in the scientific method that we rarely notice it: the God's eye view.
The God's Eye View: What Computation Requires
Computation doesn't just happen. Every computational system—from classical AI to modern deep learning—requires someone standing outside the system who:
- Defines what information means: The programmer specifies what states exist, what values represent, what counts as input and output.
- Frames the problem space: What are the possible actions? What features are relevant? When do you stop computing?
- Sets the goals: What is the system trying to achieve? What does success mean?
- Establishes truth conditions: What is true and what is false? What "is" and what "is not"?
This is the God's eye view—the objective standpoint from which the entire system can be comprehended, designed, and judged. It's the view from nowhere, the external perspective that sees and knows the whole.
In traditional computation:
- Information flows because someone designed the flow
- Functions have meaning because someone assigned the meaning
- Input and output make sense because someone framed what counts as valid
- The system "knows" what is true because truth was externally defined
Even machine learning hasn't escaped this. We select the training data (framing relevance), design the architecture (framing possibility), and specify the loss function (framing success). The God's eye view is simply pushed back one level—from explicit programming to implicit framing—but it remains.
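To make that concrete, here is a minimal training loop in plain Python (a sketch with hypothetical names, not any particular library's API). Every comment marked FRAME flags a decision made from outside the system, before learning begins.

```python
# A toy regression "learner"; hypothetical throughout.

# FRAME: someone outside chose which observations exist and what they mean.
training_data = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]  # "x predicts y"

# FRAME: someone outside chose the space of representable models.
def model(w, x):
    return w * x  # only linear relationships can even be expressed

# FRAME: someone outside chose what "success" means.
def loss(w):
    return sum((model(w, x) - y) ** 2 for x, y in training_data)

# The "learning" itself is just descent on the externally given loss.
w, lr = 0.0, 0.01
for _ in range(1000):
    grad = sum(2 * (model(w, x) - y) * x for x, y in training_data)
    w -= lr * grad

print(round(w, 3), round(loss(w), 9))  # w ~ 2.0, loss ~ 0.0
```

The loop converges, but every FRAME line was settled before it ran; the learner never chose its own question.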
This reflects the Anglo-American philosophical tradition perfectly. The grammar of "I am" treats the subject as an object with properties. If persons are objects, then consciousness is just another property to be specified, and intelligence is just computation with sufficient complexity.
The Basal Ganglia's Absurd Architecture
Now consider the basal ganglia. Its core mechanism is disinhibition—inhibition of inhibition.
The thalamus is under constant suppression from the basal ganglia's output nuclei. To permit action, the striatum inhibits those inhibitors, releasing the thalamus from suppression. It's a brake that is always on, and to move, you brake the brake.
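A toy sketch can make the double-negative tangible (hypothetical names and numbers; the real circuitry is vastly richer). The only lever available here is suppression of the tonic brake:

```python
# Disinhibition in miniature: the output nuclei brake the thalamus by
# default, and the striatum acts only by braking that brake.

def thalamic_output(striatal_inhibition: float) -> float:
    tonic_brake = 1.0                                    # the brake is always on
    brake = max(0.0, tonic_brake - striatal_inhibition)  # brake the brake
    excitatory_drive = 1.0                               # background drive
    return max(0.0, excitatory_drive - brake)            # what gets through

print(thalamic_output(0.0))  # 0.0 -> thalamus fully suppressed, no action
print(thalamic_output(1.0))  # 1.0 -> brake released, action permitted
```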
From the God's eye view, this is absurd. Why not simply excite what you want to activate? Why this elaborate double-negative?
The computational explanation claims this enables "action selection"—multiple actions compete, the winner inhibits the inhibitors most strongly. But this explanation already assumes:
- A pre-defined set of "actions" exists
- A clear goal determines which action is "best"
- A framework for evaluating competing options
- Someone has solved the frame problem
In other words, it assumes the God's eye view. It assumes someone external has already specified what counts as an action, what the organism is trying to achieve, and what information is relevant.
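Written out, the standard story is disarmingly simple, and that is exactly the problem. In this sketch (hypothetical names, not any published model), everything interesting is already decided before "selection" begins:

```python
# The computational story: action selection as winner-take-all.

ACTIONS = ["approach", "freeze", "flee"]       # assumed: a fixed action menu

def salience(action: str, context: dict) -> float:
    # Assumed: an external frame already decided which features matter.
    return context.get(action, 0.0)

def select(context: dict) -> str:
    # Assumed: a goal that makes exactly one action "best".
    return max(ACTIONS, key=lambda a: salience(a, context))

print(select({"approach": 0.2, "flee": 0.9}))  # -> "flee"
```

The menu, the relevance function, and the criterion of "best" all arrive from outside; the selection itself is trivial.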
But the basal ganglia has been conserved for 500 million years across organisms that have no external frame-setter. No programmer specified their action spaces or goal functions. No designer determined their relevance criteria.
What You Cannot Know When You're Inside
Here is the fundamental problem: A decentralized subsystem has no access to the God's eye view.
When you are a subsystem operating within a larger system that no one designed, you face radical uncertainty:
You don't know what information means. There's no shared semantic framework. What the cortex represents and what the basal ganglia represents have no guaranteed common ground. They're not exchanging "information" in the computational sense—they're interacting with mutual indefinability.
You don't know what "is." To assert "this is the correct action" requires knowing what "correct" means in the broader context. But you have no access to that context. You cannot step outside to see the whole.
You can only know what "is not." Through your own local history of interaction, you can detect what has failed. You can mark patterns that didn't work. You can learn constraints: "not this."
This is not a limitation to be overcome. This is the fundamental epistemic condition of decentralized systems.
Inhibition as a Necessity
Now the basal ganglia's architecture makes perfect sense—not computationally, but epistemologically.
Inhibition is how a subsystem honestly marks constraints. When a particular disinhibition pattern fails to produce the expected result, the subsystem can mark that pattern and rely on it less in the future. This is grounded in empirical reality: "this did not work" is something the subsystem actually experienced.
Excitation would be dishonest. To excite is to assert "this is correct," "this is the answer," "this is what should happen." But the subsystem has no epistemic right to make such claims: it doesn't know what "correct" means, because that would require the very God's eye view it lacks.
Disinhibition becomes discovery. When you inhibit an inhibition, you're not asserting "this is right"—you're removing a learned constraint. You're saying "this is not known-to-be-wrong." Action emerges not from positive command but from the removal of vetoes.
The organism acts by discovering what remains possible after all learned impossibilities have been marked.
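The same idea as a contrastive sketch (hypothetical throughout): the system never asserts what is right; it only accumulates vetoes from its own failures and acts from whatever has not been ruled out.

```python
import random

# Learning by negation: no goal function, no "best", only learned vetoes.

class NegativeLearner:
    def __init__(self, repertoire):
        self.repertoire = set(repertoire)  # patterns this body can produce
        self.vetoes = set()                # learned marks: "not this"

    def act(self):
        permitted = self.repertoire - self.vetoes
        # No claim that any option "is" correct: any unvetoed pattern may fire.
        return random.choice(sorted(permitted)) if permitted else None

    def mark_failure(self, pattern):
        # The only honest assertion available: "this did not work for me".
        self.vetoes.add(pattern)

learner = NegativeLearner(["reach", "grasp", "withdraw"])
learner.mark_failure("grasp")  # experienced failure, now inhibited
print(learner.act())           # "reach" or "withdraw", never "grasp"
```

Note how the `act` step mirrors disinhibition: it computes nothing about what is right; it only subtracts the known-to-be-wrong.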
The Frame Problem Cannot Be Solved, Only Dissolved
The frame problem asks: How does a system know what's relevant?
Computation tries to solve this by having someone external specify relevance. But this just pushes the problem back. Who decides what's relevant for the frame-setter?
The basal ganglia dissolves the problem by abandoning the God's eye view entirely:
- There is no pre-defined action space—just continuous interaction
- There is no external goal—just competing local drives that can be adjusted locally
- There is no computation that completes—just ongoing living (enliving)
- There is no truth from outside—just discovery of constraints from within
Relevance isn't solved. It emerges from being-in-the-world, from having stakes, from mattering.
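One final sketch (toy values, hypothetical names) to make the contrast concrete: a computation frames a question, runs, and returns an answer to an external judge; an enliving loop has no answer to return and no one to return it to.

```python
# Stance 1: computation. Frame, goal, and stopping condition all come
# from outside; the run exists to deliver a final, judged answer.
def compute_best(options, score):
    return max(options, key=score)

# Stance 2: ongoing interaction. Nothing completes; failed patterns are
# marked "not this" and the loop carries its vetoes forward.
def interact(repertoire, world_rejects, steps=5):
    vetoes, history = set(), []
    for _ in range(steps):           # truncated here only for the demo
        permitted = repertoire - vetoes
        if not permitted:
            break
        pattern = min(permitted)     # any unvetoed pattern will do
        if pattern in world_rejects:
            vetoes.add(pattern)      # the only learned "truth"
        else:
            history.append(pattern)
    return history

print(compute_best(["a", "b"], {"a": 1, "b": 2}.get))  # -> 'b'
print(interact({"a", "b", "c"}, {"a"}))                # -> ['b', 'b', 'b', 'b']
```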
Conclusion: The Lesson of 500 Million Years
The next time you make a decision, notice: your basal ganglia just operated without the God's eye view.
It didn't select from a menu someone designed.
It didn't maximize an externally specified function.
It didn't compute what "is"—it removed constraints about what "is not."
It didn't exchange information with cortex—it interacted with indefinability.
This is not a bug to be fixed with more sophisticated algorithms. This is life operating in genuine decentralization.
The philosophical tradition from which the scientific method was born gave us computation and its extraordinary powers. But computation requires the God's eye view—the external perspective that frames, defines, and judges.
Life doesn't have this luxury. Evolution doesn't provide it. Decentralized systems cannot access it.
So the brain does something else. Something that looks bizarre from the computational perspective. Something that makes perfect sense from within.
It discovers through negation. It learns constraints. It marks what is not. And from the space that remains—after all impossibilities have been inhibited—action emerges.
Not computed. Discovered.
Not designed from outside. Emerged from within.
Not information flowing through a program. Interaction with indefinability, where subsystems can only honestly say: "Not this. Not that. But perhaps... this remains possible."
The basal ganglia has been trying to tell us this for 500 million years: Intelligence without the God's eye view requires inhibition.
We need Geneosophy to hear what it's been saying.