Computation is not the right framework to comprehend the concept of life and intelligence

Computation is limited.

We live in a computational age. When we want to formalize our thoughts—to express ideas precisely, to build models of intelligence, to create systems that reason—we almost instinctively reach for computation. The Church-Turing thesis tells us that any "effectively computable" function can be computed by a Turing machine, and this has become so foundational to computer science and AI that we often treat computation as the way to formalize thought itself.

But what if computation isn't the only possibility? What if there are other ways to formalize thought by manipulating symbols, ways that don't fit the computational paradigm? Ways that would let us formalize aspects of thought we currently cannot, such as the autonomous creativity we attribute to life and intelligence.

To explore this question, we need to look beneath the surface of what computation actually is—and discover something we've been taking for granted all along.

The Standard View: Computation as Symbol Manipulation

Ask most people what computation is, and they'll describe something like this: the mechanical manipulation of symbols according to fixed, unambiguous rules. A Turing machine reads symbols, follows deterministic rules about what to write and where to move, and transforms inputs into outputs. Even modern neural networks, despite their learned rather than hand-coded rules, still operate through rule-following behavior during execution.
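
To make that picture concrete, here is a minimal sketch of rule-following symbol manipulation in Python. The machine, its transition table, and its tape are invented for illustration; it simply flips every bit it reads, then halts.

```python
# A minimal Turing-machine sketch: a state, a tape, a head position, and a
# transition table mapping (state, symbol) -> (new symbol, move, new state).
# This toy machine flips every bit on its tape, then halts.

transitions = {
    ("scan", "0"): ("1", +1, "scan"),   # read 0 -> write 1, move right
    ("scan", "1"): ("0", +1, "scan"),   # read 1 -> write 0, move right
    ("scan", "_"): ("_", 0, "halt"),    # blank -> halt
}

def run(tape):
    tape = list(tape) + ["_"]           # "_" marks the blank end of the tape
    state, head = "scan", 0
    while state != "halt":
        symbol = tape[head]
        new_symbol, move, state = transitions[(state, symbol)]
        tape[head] = new_symbol
        head += move
    return "".join(tape).rstrip("_")

print(run("0110"))  # -> "1001"
```

Given the same tape, this machine can only ever do one thing: every step is fixed by the transition table.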

This view focuses on what I'll call the secondary meaning of symbols—what they represent. In computation, symbols might represent:

  • Numbers and quantities
  • Instructions and data
  • States of the world
  • Anything we choose to encode

We design our algorithms, write our programs, and train our models with these secondary meanings in mind. We think: "This variable represents temperature, this function computes velocity, this neural network recognizes cats."

But there's something deeper happening that we rarely examine.

The Primary Meaning: What Makes Computation Possible

Before any symbol can represent temperature or encode an instruction, it must have what we might call a primary meaning—the fundamental properties that make computational manipulation possible in the first place.

Consider what must be true for computation to work:

Symbols must carry the meaning of being discrete and separated. Each symbol is a distinct, individuated object with clear boundaries. A "0" is absolutely distinguishable from a "1." There's no ambiguity, no gradation. This discreteness is what allows rules to reliably answer "which symbol is this?" and operate accordingly.

Symbols must carry a meaning that allows structured relations. They don't float in isolation—they have positions, sequences, spatial arrangements. One symbol is "next to" another, "before" or "after" it, "at address X." These relational structures are what allow rules to reference and manipulate groups of symbols systematically.

Symbols must carry a meaning that allows spatial/sequential ordering. Whether it's the linear tape of a Turing machine, the memory addresses of a computer, or the sequential layers of a neural network, symbols exist in some form of ordering that rules can traverse and reference.

These properties—discreteness, relational structure, spatial ordering—are the primary meanings that computational symbols must possess. They aren't secondary interpretations we layer on top; they're the foundational ontology that makes rule-based manipulation possible.

The Unity of Rules and Primary Meaning

Here's the crucial insight: computational rules and primary meanings form an inseparable unity. The rules are defined over exactly those primary meanings; neither makes sense without the other.

Consider a simple computational rule: "If the symbol at position X equals '1', write '0' at position Y." This rule depends entirely on the primary meanings:

  • Discreteness (recognizing that something "equals '1'")
  • Position (referencing locations X and Y)
  • Relational structure (addressing specific symbols)
  • Sequential operation (current state → next state)

The rules don't just happen to operate on symbols with these properties—the rules can only operate on symbols that have computational primary meaning. You cannot have computational rules without discrete objects in a specific structured environment, and these objects exist precisely to be rule-manipulable.
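
As a sketch, here is that rule written directly in Python, with comments marking the primary meaning each line silently depends on. The tape contents and positions are invented for illustration.

```python
# The rule "if the symbol at position X equals '1', write '0' at position Y",
# annotated with the primary meanings it relies on.

def apply_rule(tape, x, y):
    if tape[x] == "1":      # discreteness: symbol identity is unambiguous
        tape[y] = "0"       # position + relational structure: addresses X and Y
    return tape             # sequential operation: current state -> next state

tape = ["1", "0", "1", "1"]         # a structured environment of discrete objects
print(apply_rule(tape, x=0, y=2))   # -> ['1', '0', '0', '1']
```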

A New Definition of Computation

This leads us to a cleaner, more fundamental definition:

Computation is the manipulation of discrete objects in structured environments through the assignment of specific primary and secondary meaning to symbols.

Once you establish:

  1. Discrete, separated objects that allow for unique secondary meaning to be assigned
  2. Structured relations between them
  3. Rules that operate on these primary meanings

Then you can layer whatever secondary meaning you want on top of that specific kind of symbol manipulation.

This definition captures something the Church-Turing thesis leaves implicit: computation doesn't just require rules and algorithms; it requires a specific primary ontology of objects for computational manipulation to be possible.

Is This a Limitation?

For most of the history of computer science, we've taken this ontology for granted. And for good reason—it works spectacularly well for countless applications. Physics simulations, database management, image processing, language translation: all rely on this discrete-relational-spatial foundation.

But some might suspect that this foundation is actually a limitation.

Consider the challenges we face when trying to capture certain phenomena computationally:

Autonomous creativity. We can program systems to generate novel combinations, to optimize over learned patterns, to sample from probability distributions. But is this genuine creative autonomy, or just sophisticated rule-following with stochastic elements?
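
A small sketch makes the worry tangible: seed the randomness, and the "novel" output is fully determined by the rules. The vocabulary and generator below are hypothetical, chosen only to illustrate the point.

```python
# "Creative" generation by sampling from a distribution: fix the seed and
# the entire output is determined by the rules, stochastic element included.

import random

vocabulary = ["sun", "river", "glass", "echo"]   # hypothetical token set

def generate_phrase(seed, length=3):
    rng = random.Random(seed)   # stochasticity = a deterministic rule plus a seed
    return " ".join(rng.choice(vocabulary) for _ in range(length))

print(generate_phrase(seed=42))   # identical every run
print(generate_phrase(seed=42))   # same seed, same "novel" combination
```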

Holistic understanding. We decompose problems into discrete steps, break knowledge into separate facts, represent concepts as vectors in spaces. But does this discrete decomposition capture how understanding might work holistically?

Continuous adaptation. We train models, then deploy them. We can add online learning, but it follows computational rules. What about systems that evolve continuously, not through discrete update steps but through intrinsic dynamics?
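
A sketch of a typical online-learning loop shows what "follows computational rules" means here: adaptation arrives as a sequence of discrete, rule-governed update steps. The toy model, data stream, and learning rate are assumptions made for the example.

```python
# Even "continuous" adaptation in practice is a loop of discrete update steps:
# observe one example, compute an error, apply a rule-governed correction.

def online_update(weight, x, target, lr=0.1):
    prediction = weight * x
    error = prediction - target
    return weight - lr * error * x   # one discrete, rule-defined step

weight = 0.0
stream = [(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)]   # examples arriving over time
for x, target in stream:
    weight = online_update(weight, x, target)   # step t -> step t+1
print(weight)  # approaches 2.0, one discrete step at a time
```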

Event-driven intelligence. Neuromorphic engineers building event-based cameras and spiking neural networks often find themselves converting asynchronous events back into frames, imposing computational structure on naturally event-driven processes. Why? Because our algorithms, our training methods, our entire computational toolkit requires the discrete-structured foundation.
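
The conversion described above can be sketched in a few lines: asynchronous (timestamp, x, y) events are binned into fixed time windows and a pixel grid, re-imposing exactly the discrete, structured foundation computation requires. The event format and bin width are assumptions for this example.

```python
# Converting asynchronous events back into frames: each event is a
# (timestamp, x, y) tuple; binning them into fixed time windows re-imposes
# a discrete, grid-structured representation on an event-driven signal.

from collections import defaultdict

def events_to_frames(events, bin_width_ms, width, height):
    frames = defaultdict(lambda: [[0] * width for _ in range(height)])
    for t_ms, x, y in events:
        frame_index = t_ms // bin_width_ms    # discrete time bucket
        frames[frame_index][y][x] += 1        # discrete spatial grid
    return dict(frames)

events = [(0, 1, 0), (3, 1, 0), (12, 0, 1)]   # sparse asynchronous events
print(events_to_frames(events, bin_width_ms=10, width=2, height=2))
# -> {0: [[0, 2], [0, 0]], 1: [[0, 0], [1, 0]]}
```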

These aren't failures of computation—they're indications that computation, defined by its primary meanings, might have limitations.

The Takeaway

We treat computation as synonymous with formal reasoning itself. The Church-Turing thesis seems to tell us that anything computable is Turing-computable, and we quietly extend this to mean anything thinkable “should” be computational.

But computation rests on a foundation we rarely examine: the primary meanings of discrete separation, relational structure, and spatial ordering. These aren't neutral properties that any formal system would have—they're the specific ontology that makes rule-based manipulation possible.

Recognizing this foundation doesn't diminish computation. For the vast majority of problems—physics, engineering, logic, mathematics, data processing—the computational ontology is exactly what we need. Discrete objects in structured environments are perfect for building bridges, balancing accounts, and simulating physical systems.

But for phenomena involving autonomous creativity, holistic understanding, or genuine open-ended evolution? Perhaps we've been assuming computation is the only game in town when it's actually one specific formalism grounded in specific primary meanings.

The question isn't whether computation works—it demonstrably does. The question is: can we devise a new formalism that would allow symbol manipulations based on new primary meanings? That is where Geneosophy comes in.
