An experiment inspired by Nick Bostrom’s Superintelligence
In Nick Bostrom’s book, Superintelligence, one of the ideas is that a sufficiently advanced AI might not just improve upon our existing computational paradigms — it might invent entirely new ones. Ones we haven’t thought of yet. Ones that don’t look like chips, transistors, or binary logic at all.
That got me wondering:
Could today’s AI — not a superintelligence, just the models we have now — start sketching out what those new paradigms might look like?
So I ran an experiment.
I asked both ChatGPT and Grok to think seriously about novel approaches to computation (and at the end having them talk directly to each other). I asked them to start with philosophy, but then ground the ideas in physics and mathematics we already understand. I wanted to see what they would come up with – if there was any way to exploit untapped areas for computing purposes.
What came back surprised me. They converged on an idea rooted in topological physics — the same field whose foundations earned the 2016 Nobel Prize in Physics.
Here’s what they proposed: Topological Field Computation: Programmable Hamiltonians in Metamaterials.
The Big Idea: Topology as a Foundation for Computation
In physics, certain properties are “topologically protected” — meaning they’re robust against small disturbances, imperfections, or noise. Think of the difference between a donut and a sphere: no matter how you stretch or deform a donut, it still has a hole. That hole is a topological invariant. You’d have to tear the thing apart to change it. In quantum and photonic systems, analogous invariants protect certain physical states from falling apart when things get noisy.
For computation, this is a very attractive property. Our current computers spend enormous energy on error correction and precise control because conventional physical states are fragile. What if you could build computational states that were inherently hard to disturb?
The AI models proposed using a specific model from condensed matter physics — the Su-Schrieffer-Heeger (SSH) model — implemented in photonic waveguide arrays as a testbed for this idea.
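To make the SSH model concrete, here is a minimal numerical sketch of it: a 1-D chain with alternating hopping strengths. In the topological phase (intercell hopping stronger than intracell hopping) an open chain hosts near-zero-energy edge states; in the trivial phase it does not. The chain length and hopping values below are illustrative choices, not parameters from either model's proposal.

```python
# Minimal SSH (Su-Schrieffer-Heeger) tight-binding sketch.
# Illustrative parameters only -- not from the proposals in this post.
import numpy as np

def ssh_hamiltonian(n_cells: int, v: float, w: float) -> np.ndarray:
    """Open SSH chain: v = intracell hopping, w = intercell hopping."""
    n = 2 * n_cells                       # two sites (A, B) per unit cell
    h = np.zeros((n, n))
    for i in range(n - 1):
        t = v if i % 2 == 0 else w        # alternate intracell / intercell bonds
        h[i, i + 1] = h[i + 1, i] = t
    return h

def min_abs_energy(v: float, w: float, n_cells: int = 20) -> float:
    """Smallest |eigenvalue|: near zero signals topological edge states."""
    return float(np.min(np.abs(np.linalg.eigvalsh(ssh_hamiltonian(n_cells, v, w)))))

# Trivial phase (v > w): gapped, no edge states near zero energy.
print("trivial     (v=1.0, w=0.5):", round(min_abs_energy(1.0, 0.5), 4))
# Topological phase (w > v): edge states pinned near zero energy.
print("topological (v=0.5, w=1.0):", round(min_abs_energy(0.5, 1.0), 4))
```

The point of the comparison is exactly the one in the proposals: the near-zero edge modes in the topological phase are a property of the lattice's structure, not of fine-tuned parameter values.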
Grok: Towards Topological Field Computation
Grok took a broad, visionary approach, laying out a full conceptual framework for what it calls topological field computation, built around three core ideas:
Memory as stable curvature. Rather than storing information in a bit that’s either 0 or 1, imagine storing it in the shape of a field — a topological configuration like a vortex or soliton. These configurations are stable not because you’re actively holding them in place, but because the mathematics of the space they live in makes them hard to destroy. Grok calls this “stable curvature in configuration space.”
Addressability as landscape navigation. To read or write information, you don’t flip a switch — you navigate an energy landscape. You tune parameters to move between valid topological states, like hiking between valleys in a terrain. Crucially, if the terrain is topological, the paths between states are more reliable and the states themselves are more distinct.
Intelligence as Hamiltonian sculpting. The most speculative and interesting idea: what if a system could adaptively reshape its own governing equations (its Hamiltonian) in response to feedback? This would be a form of hardware-level learning — not software running on static hardware, but the physical substrate itself adapting.
To test whether any of this is real and not just beautiful theory, Grok proposed a minimum viable demonstration (MVD): build a small SSH photonic lattice in a silicon-on-insulator chip (a mature, manufacturable technology), and compare it against a topologically trivial lattice with identical specs.
The key falsifiable claim:
The topological system should maintain high routing fidelity with fewer control inputs and less energy than the trivial one.
If it doesn’t, the theory needs revision.
Grok also honestly addresses the hard problems — sensitivity to cross-talk between waveguides, partial observability (you can’t watch every part of the system at once), and the challenge of scaling beyond a handful of unit cells.
ChatGPT: Topology as Control Compression
ChatGPT’s approach is narrower and more rigorously experimental in its framing, but it homes in on what may be the most important single question hiding inside Grok’s broader vision.
The central hypothesis:
Topological protection compresses control dimensionality.
Here’s what that means in plain terms: Any real physical computing system drifts. Temperature changes, fabrication imperfections, and environmental noise all push your system away from where you want it. To compensate, you need feedback control — sensors that detect drift and actuators that push back. In most systems, as you add more components, the number of control inputs you need scales up roughly linearly. This becomes a serious bottleneck.
ChatGPT asks: does topology break this scaling? If a topological system’s important states are inherently stable, maybe you need far fewer “nudges” to keep the system on track. The approach formalizes this as a control-compression curve: plot routing fidelity against the number of effective control degrees of freedom, and see if the SSH lattice reaches high fidelity (>90%) with a smaller control rank than a matched trivial lattice.
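A toy rendering of that control-compression curve may help. Given a sensitivity matrix and a drift disturbance, we can ask how much residual error remains if correction is restricted to only the top-k singular directions of the sensitivity matrix — a stand-in for "k effective control degrees of freedom." The matrix and drift vector here are synthetic random data, purely to show the shape of the curve, not a simulation of either lattice.

```python
# Toy "control-compression curve": residual drift vs. allowed control rank.
# Synthetic data only -- illustrates the metric, not the photonic experiment.
import numpy as np

rng = np.random.default_rng(1)

def residual_vs_rank(S: np.ndarray, drift: np.ndarray) -> list:
    """Residual drift norm after optimal correction restricted to the
    top-k left singular directions of S, for k = 0 .. min(S.shape)."""
    U, _, _ = np.linalg.svd(S)
    out = [float(np.linalg.norm(drift))]            # k = 0: no control at all
    for k in range(1, min(S.shape) + 1):
        Uk = U[:, :k]
        corrected = drift - Uk @ (Uk.T @ drift)     # remove the correctable part
        out.append(float(np.linalg.norm(corrected)))
    return out

S = rng.standard_normal((6, 6))       # hypothetical sensitivity matrix
drift = rng.standard_normal(6)        # hypothetical drift disturbance
print([round(r, 3) for r in residual_vs_rank(S, drift)])
```

The curve is non-increasing by construction; the hypothesis under test is that a topological lattice's curve drops to low residual at a much smaller k than a trivial lattice's.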
The experimental setup is elegant in its fairness: fabricate both systems on the same chip, subject them to identical drift protocols (±10°C thermal ramps with spatial gradients), and give them identical actuator budgets. Any difference in performance is attributable to topology, not experimental conditions.
ChatGPT also introduces a clean mathematical handle for the comparison: the condition number of the sensitivity matrix — essentially, how efficiently does a unit of control effort translate into a unit of useful correction? A lower condition number means better-conditioned control, less wasted effort, and more stable feedback loops. The prediction is that the topological SSH system should show meaningfully better conditioning.
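The condition number itself is cheap to compute once you have a sensitivity matrix (rows: observed outputs such as port powers; columns: control knobs such as heater settings). The matrices below are synthetic stand-ins to show how the metric is computed and read, not measured data.

```python
# Condition number of a sensitivity matrix: sigma_max / sigma_min.
# Lower = control effort maps more evenly into useful correction.
# Synthetic example matrices -- not measurements from any device.
import numpy as np

rng = np.random.default_rng(0)

def sensitivity_condition(S: np.ndarray) -> float:
    """kappa(S) via singular values; large kappa means some drift
    directions are nearly uncorrectable without huge actuation."""
    sigma = np.linalg.svd(S, compute_uv=False)
    return float(sigma[0] / sigma[-1])

# Well-conditioned: each knob mostly addresses its own output.
S_good = np.eye(4) + 0.1 * rng.standard_normal((4, 4))
# Ill-conditioned: all knobs push the system in nearly the same direction.
S_bad = np.outer(np.ones(4), np.ones(4)) + 0.01 * rng.standard_normal((4, 4))

print("well-conditioned kappa:", round(sensitivity_condition(S_good), 2))
print("ill-conditioned  kappa:", round(sensitivity_condition(S_bad), 2))
```

ChatGPT's prediction, in these terms, is simply that the SSH lattice's measured sensitivity matrix should have a meaningfully smaller kappa than the trivial lattice's under the same drift protocol.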
Importantly, ChatGPT maps out all three possible outcomes honestly:
- If SSH wins: topology is a genuine control-theoretic resource, not just a curiosity.
- If SSH ties: topology helps with static disorder but not dynamic stabilization — still useful, but the stronger claim fails.
- If SSH loses: the advantage only appears in nonlinear regimes, and the linear test is the wrong arena.
This kind of pre-registered, falsifiable structure is exactly what makes a proposal scientifically credible rather than speculative hand-waving.
Where the Two AI Models Agree (and Where They Differ)
Both approaches agree on the core physical setup: SSH lattice in silicon photonics, compared against a trivial baseline, with the key metric being how hard it is to maintain functional behavior under drift. They agree that the interesting question isn’t just “does topology survive disorder” (that’s already known) but “does topology reduce the cost of control.”
Where they differ is in scope and emphasis. Grok builds a grander intellectual framework — the memory/addressability/intelligence triad gestures toward a full paradigm for field-based computation, with implications for neuromorphic hardware, analog optimization, and even quantum simulation. ChatGPT drills down into the control-theoretic machinery, giving you the specific matrices, metrics, and statistical tests you’d need to actually run the experiment and publish the result.
They’re complementary. Grok gives you the why, ChatGPT gives you the how to test it.
Why This Matters
If the core hypothesis holds up experimentally, the implications are significant. Right now, scaling programmable photonic or analog systems is limited not primarily by fabrication — silicon photonics is mature — but by the exploding complexity of the control layer. Every component you add potentially requires new sensors, new actuators, new feedback loops. Topology, if it genuinely compresses control dimensionality, could change that scaling relationship.
That’s not a path to replacing digital computers. But it could be a path to a new class of hardware that’s particularly good at analog optimization, physical neural networks, or robust signal routing — and that does it with substantially less overhead than current approaches.
And at a more philosophical level: this is exactly the kind of thing Bostrom was pointing at. Not magic, not science fiction — but a serious engagement with physical principles we already understand, applied in ways we haven’t fully explored. Today’s AI didn’t invent topological physics. But it did ask a question about it that researchers haven’t fully answered yet.
That seems like a reasonable start – and a big leap from where we were in 2016.


