Tag: Artificial Intelligence

  • What If AI Imagined a New Way to Compute? I Asked It To.

    An experiment inspired by Nick Bostrom’s Superintelligence

    In Nick Bostrom’s book, Superintelligence, one of the ideas is that a sufficiently advanced AI might not just improve upon our existing computational paradigms — it might invent entirely new ones. Ones we haven’t thought of yet. Ones that don’t look like chips, transistors, or binary logic at all.

    That got me wondering:

    Could today’s AI — not a superintelligence, just the models we have now — start sketching out what those new paradigms might look like?

    So I ran an experiment.

    I asked both ChatGPT and Grok to think seriously about novel approaches to computation (and, at the end, had them talk directly to each other). I asked them to start with philosophy, but then ground the ideas in physics and mathematics we already understand. I wanted to see what they would come up with, and whether there was any way to exploit untapped areas for computing purposes.

    What came back surprised me. They converged on an idea rooted in topological physics, the field that earned the 2016 Nobel Prize in Physics.

    Here’s what they proposed: Topological Field Computation: Programmable Hamiltonians in Metamaterials.

    The Big Idea: Topology as a Foundation for Computation

    In physics, certain properties are “topologically protected” — meaning they’re robust against small disturbances, imperfections, or noise. Think of the difference between a donut and a sphere: no matter how you stretch or deform a donut, it still has a hole. That hole is a topological invariant. You’d have to tear the thing apart to change it. In quantum and photonic systems, analogous invariants protect certain physical states from falling apart when things get noisy.

    For computation, this is a very attractive property. Our current computers spend enormous energy on error correction and fine-grained control precisely because standard physical states are fragile. What if you could build computational states that were inherently hard to disturb?

    The AI models proposed using a specific model from condensed matter physics — the Su-Schrieffer-Heeger (SSH) model — implemented in photonic waveguide arrays as a testbed for this idea.
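    To make that testbed concrete, here is a minimal numerical sketch of the SSH model (my own Python illustration; the function and parameter names are mine, not code from either proposal). The chain alternates two hopping strengths, v and w, and becomes topological when w > v, which shows up as near-zero-energy edge states:

    ```python
    # Minimal SSH tight-binding sketch: a 1D chain with alternating hopping
    # amplitudes v (intracell) and w (intercell). When w > v the chain is
    # topological and hosts near-zero-energy edge states; when w < v it is
    # trivial and fully gapped. (Illustrative only, not from either proposal.)
    import numpy as np

    def ssh_hamiltonian(n_cells: int, v: float, w: float) -> np.ndarray:
        """Real-space Hamiltonian of an open SSH chain with n_cells unit cells."""
        n_sites = 2 * n_cells
        h = np.zeros((n_sites, n_sites))
        for i in range(n_sites - 1):
            h[i, i + 1] = h[i + 1, i] = v if i % 2 == 0 else w  # v, w, v, w, ...
        return h

    for v, w, label in [(0.5, 1.0, "topological (w > v)"), (1.0, 0.5, "trivial (w < v)")]:
        energies = np.linalg.eigvalsh(ssh_hamiltonian(20, v, w))
        print(f"{label}: smallest |E| = {np.min(np.abs(energies)):.4f}")
    # Expected: the topological chain shows |E| very close to 0 (edge modes);
    # the trivial chain shows a clean gap of roughly |v - w|.
    ```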

    Grok: Towards Topological Field Computation

    Grok took a broad, visionary approach, laying out a full conceptual framework for what it calls topological field computation, built around three core ideas:

    Memory as stable curvature. Rather than storing information in a bit that’s either 0 or 1, imagine storing it in the shape of a field — a topological configuration like a vortex or soliton. These configurations are stable not because you’re actively holding them in place, but because the mathematics of the space they live in makes them hard to destroy. Grok calls this “stable curvature in configuration space.” (A toy winding-number sketch follows this list.)

    Addressability as landscape navigation. To read or write information, you don’t flip a switch — you navigate an energy landscape. You tune parameters to move between valid topological states, like hiking between valleys in a terrain. Crucially, if the terrain is topological, the paths between states are more reliable and the states themselves are more distinct.

    Intelligence as Hamiltonian sculpting. The most speculative and interesting idea: what if a system could adaptively reshape its own governing equations (its Hamiltonian) in response to feedback? This would be a form of hardware-level learning — not software running on static hardware, but the physical substrate itself adapting.
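    Here is the toy winding-number sketch promised above, illustrating the first idea (my own Python illustration, not Grok’s code). The integer winding of a phase field around a closed loop is a topological invariant: moderate noise jostles the field everywhere, but it cannot change the integer without a global rearrangement:

    ```python
    # Toy "stable curvature" demo: the winding number of a closed 1D phase field
    # is an integer invariant. Adding moderate noise perturbs the field locally
    # but leaves the winding number unchanged. (Illustrative sketch only.)
    import numpy as np

    def winding_number(phase: np.ndarray) -> int:
        """Net number of full 2*pi turns the phase makes around the loop."""
        jumps = np.diff(phase, append=phase[0])   # steps, with the loop closed
        wrapped = np.angle(np.exp(1j * jumps))    # wrap each step into (-pi, pi]
        return int(round(wrapped.sum() / (2 * np.pi)))

    x = np.linspace(0, 2 * np.pi, 200, endpoint=False)
    field = 2 * x                                 # winds twice around the loop
    noisy = field + 0.3 * np.random.default_rng(0).normal(size=x.size)
    print(winding_number(field), winding_number(noisy))  # both print 2
    ```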

    To test whether any of this is real and not just beautiful theory, Grok proposed a minimum viable demonstration (MVD): build a small SSH photonic lattice in a silicon-on-insulator chip (a mature, manufacturable technology), and compare it against a topologically trivial lattice with identical specs.

    The key falsifiable claim:

    The topological system should maintain high routing fidelity with fewer control inputs and less energy than the trivial one.

    If it doesn’t, the theory needs revision.

    Grok also honestly addresses the hard problems — sensitivity to cross-talk between waveguides, partial observability (you can’t watch every part of the system at once), and the challenge of scaling beyond a handful of unit cells.

    ChatGPT: Topology as Control Compression

    ChatGPT’s approach is narrower and more rigorously experimental in its framing, but it homes in on what may be the most important single question hiding inside Grok’s broader vision.

    The central hypothesis:

    Topological protection compresses control dimensionality.

    Here’s what that means in plain terms: Any real physical computing system drifts. Temperature changes, fabrication imperfections, and environmental noise all push your system away from where you want it. To compensate, you need feedback control — sensors that detect drift and actuators that push back. In most systems, as you add more components, the number of control inputs you need scales up roughly linearly. This becomes a serious bottleneck.

    ChatGPT asks: does topology break this scaling? If a topological system’s important states are inherently stable, maybe you need far fewer “nudges” to keep the system on track. The approach formalizes this as a control-compression curve: plot routing fidelity against the number of effective control degrees of freedom, and see if the SSH lattice reaches high fidelity (>90%) with a smaller control rank than a matched trivial lattice.
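    As a toy illustration of what a control-compression curve measures (my own sketch; the matrix and drift vector are random stand-ins, not measured device data): treat control as a linear sensitivity matrix, and ask how much of a drift vector can be cancelled using only the top few control modes:

    ```python
    # Toy control-compression curve: how much of a drift vector can be corrected
    # using only the top-k control modes (SVD directions) of a sensitivity matrix?
    # The hypothesis predicts a topological lattice reaches high fidelity at a
    # much lower rank than a trivial one. (Random stand-in data, illustrative only.)
    import numpy as np

    rng = np.random.default_rng(0)
    n_outputs, n_controls = 32, 32
    S = rng.normal(size=(n_outputs, n_controls))   # sensitivity: controls -> outputs
    drift = rng.normal(size=n_outputs)             # disturbance to be cancelled

    U, sigma, Vt = np.linalg.svd(S, full_matrices=False)
    for rank in (1, 2, 4, 8, 16, 32):
        basis = U[:, :rank]                        # subspace reachable with k modes
        residual = drift - basis @ (basis.T @ drift)
        fidelity = 1 - np.linalg.norm(residual) / np.linalg.norm(drift)
        print(f"rank {rank:2d}: correction fidelity = {fidelity:.2f}")
    ```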

    The experimental setup is elegant in its fairness: fabricate both systems on the same chip, subject them to identical drift protocols (±10°C thermal ramps with spatial gradients), and give them identical actuator budgets. Any difference in performance is attributable to topology, not experimental conditions.

    ChatGPT also introduces a clean mathematical handle for the comparison: the condition number of the sensitivity matrix — essentially, how efficiently does a unit of control effort translate into a unit of useful correction? A lower condition number means better-conditioned control, less wasted effort, and more stable feedback loops. The prediction is that the topological SSH system should show meaningfully better conditioning.
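    In code terms, that metric is just the ratio of the largest to the smallest singular value of the sensitivity matrix. A quick sketch (mine, using toy matrices with prescribed conditioning rather than device data) shows why it matters: cancelling the same drift through an ill-conditioned sensitivity matrix demands far more actuation effort:

    ```python
    # Why the condition number matters: cancelling the same drift through an
    # ill-conditioned sensitivity matrix requires much larger control effort.
    # (Toy matrices with prescribed singular values; illustrative only.)
    import numpy as np

    rng = np.random.default_rng(1)

    def toy_sensitivity(cond: float, n: int = 16) -> np.ndarray:
        """Random n x n matrix whose singular values span a factor of `cond`."""
        u, _, vt = np.linalg.svd(rng.normal(size=(n, n)))
        return u @ np.diag(np.logspace(0, -np.log10(cond), n)) @ vt

    drift = rng.normal(size=16)
    for label, cond in [("well-conditioned (hypothesized SSH)", 1e1),
                        ("ill-conditioned (trivial lattice)", 1e4)]:
        S = toy_sensitivity(cond)
        effort = np.linalg.norm(np.linalg.solve(S, drift))  # actuation needed
        print(f"{label}: cond(S) = {np.linalg.cond(S):.0f}, effort = {effort:.1f}")
    ```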

    Importantly, ChatGPT maps out all three possible outcomes honestly:

    • If SSH wins: topology is a genuine control-theoretic resource, not just a curiosity.
    • If SSH ties: topology helps with static disorder but not dynamic stabilization — still useful, but the stronger claim fails.
    • If SSH loses: the advantage only appears in nonlinear regimes, and the linear test is the wrong arena.

    This kind of pre-registered, falsifiable structure is exactly what makes a proposal scientifically credible rather than speculative hand-waving.

    Where the Two AI Models Agree (and Where They Differ)

    Both approaches agree on the core physical setup: SSH lattice in silicon photonics, compared against a trivial baseline, with the key metric being how hard it is to maintain functional behavior under drift. They agree that the interesting question isn’t just “does topology survive disorder” (that’s already known) but “does topology reduce the cost of control.”

    Where they differ is in scope and emphasis. Grok builds a grander intellectual framework — the memory/addressability/intelligence triad gestures toward a full paradigm for field-based computation, with implications for neuromorphic hardware, analog optimization, and even quantum simulation. ChatGPT drills down into the control-theoretic machinery, giving you the specific matrices, metrics, and statistical tests you’d need to actually run the experiment and publish the result.

    They’re complementary. Grok gives you the why, ChatGPT gives you the how to test it.

    Why This Matters

    If the core hypothesis holds up experimentally, the implications are significant. Right now, scaling programmable photonic or analog systems is limited not primarily by fabrication — silicon photonics is mature — but by the exploding complexity of the control layer. Every component you add potentially requires new sensors, new actuators, new feedback loops. Topology, if it genuinely compresses control dimensionality, could change that scaling relationship.

    That’s not a path to replacing digital computers. But it could be a path to a new class of hardware that’s particularly good at analog optimization, physical neural networks, or robust signal routing — and that does it with substantially less overhead than current approaches.

    And at a more philosophical level: this is exactly the kind of thing Bostrom was pointing at. Not magic, not science fiction — but a serious engagement with physical principles we already understand, applied in ways we haven’t fully explored. Today’s AI didn’t invent topological physics. But it did ask a question about it that researchers haven’t fully answered yet.

    That seems like a reasonable start – and a big leap from where we were in 2016.

  • The Acceleration of Artificial Intelligence

    Sometime over the last 20 years, artificial intelligence went from something we laughed about and feared in movies to something people devote their lives to and view as the salvation (or destruction) of mankind.

    Tim Urban at Wait But Why has already written a tome on “The AI Revolution” (Parts 1 and 2), which is a very good primer on AI. For a slightly longer read, consider Nick Bostrom’s Superintelligence: Paths, Dangers, Strategies. This post is neither.

    This post is meant to describe the acceleration of artificial intelligence development as seen through various news updates and published papers. In the process I hope to convey to you, the reader, how close we are to artificial general intelligence (AGI).

    What is Artificial Intelligence?

    For this post, we are limiting the topic to AGI, which Tim Urban describes as, “a computer that is as smart as a human across the board—a machine that can perform any intellectual task that a human being can.” The problem is that we may not have a good way of quantifying AGI. It’s as if once we’ve achieved any new threshold in AI, humans tend to move the bar higher.

    “Once there is a program in existence that does the job, you are inclined to think it’s merely a formula and that isn’t thinking. The very success of me producing a program which exhibits thinking causes you to deny that that can be thinking.” -Dr. Richard W. Hamming (1915-1998) in a lecture from “The Art of Doing Science and Engineering: Learning to Learn: Artificial Intelligence – Part I” (April 7, 1995)

    Peter H. Diamandis and Steven Kotler echo this sentiment in their book, Bold: How to Go Big, Create Wealth and Impact the World, which covers artificial intelligence as an “exponential technology.” It’s not just this human bias that hides artificial intelligence’s acceleration; sometimes AI gets written about under other names, such as “natural computing”:

    “Researchers at UCLA and the National Institute for Materials Science in Japan have developed a method to fabricate a self-organized complex device called an atomic switch network that is in many ways similar to a brain or other natural or cognitive computing device…The device we have created is capable of rapidly generating self-organization in a small chip with high speed…Experiments demonstrated that the atomic switch network exhibits emergent behavior…has the potential to process information at very high rates…We plan to move towards a hybrid morphic system using the best of conventional computation with our brain-like device capabilities…This would be a radical step in the real development of AI.” –“Scientists develop atomic-scale hardware to implement natural computing,” May 13, 2015, by Lisa Zyga

    “Emergent behavior” is when a number of simple entities (agents) operate in an environment, forming more complex behaviors as a collective. This is similar to the cellular automata that Stephen Wolfram describes in A New Kind of Science.
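    As a concrete illustration of emergence (my own Python sketch, in the spirit of Wolfram’s elementary cellular automata): a trivially simple local rule, applied repeatedly, generates a famously complex global pattern:

    ```python
    # Emergent behavior from a simple local rule: Wolfram's elementary cellular
    # automaton Rule 30. Each cell's next state depends only on itself and its
    # two neighbors, yet the global pattern it produces is famously complex.
    def step(cells: list[int], rule: int = 30) -> list[int]:
        """Advance one generation; edges wrap around."""
        n = len(cells)
        return [(rule >> (4 * cells[i - 1] + 2 * cells[i] + cells[(i + 1) % n])) & 1
                for i in range(n)]

    cells = [0] * 31 + [1] + [0] * 31          # one live cell in the middle
    for _ in range(16):
        print("".join("█" if c else " " for c in cells))
        cells = step(cells)
    ```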

    “RMIT University researchers have mimicked the way the human brain processes information with the development of an electronic long-term memory cell. Researchers at the MicroNano Research Facility (MNRF) have built one of the world’s first electronic multi-state memory cells, which mirrors the brain’s ability to simultaneously process and store multiple strands of information. The development brings them closer to imitating key electronic aspects of the human brain — a vital step towards creating a bionic brain.” –“Nano memory cell can mimic the brain’s long-term memory,” May 12, 2015

    Sometimes, narrow AI technology is created that works in ways its creator did not intend or does not understand. “He can’t really explain all the reasons it does what it does. It’s started making decisions on its own.” That line comes from Ashlee Vance’s profile of George Hotz, “The First Person to Hack the iPhone Built a Self-Driving Car. In His Garage; George Hotz is taking on Google and Tesla by himself.”

    “He’d devoured the cutting-edge AI research and decided the technology wasn’t that hard to master. Hotz took a job at Vicarious, a highflying AI startup, in January to get a firsthand look at the top work in the field, and this confirmed his suspicions. ‘I understand the state-of-the-art papers,’ he says. ‘The math is simple. For the first time in my life, I’m like, I know everything there is to know.’” -Ashlee Vance, quoting George Hotz

    The Economics of Artificial Intelligence

    And then there is the economic aspect of AI. Narrower AIs, such as those used in driverless cars, may need to buy gas (or electricity) and get maintenance. They can even make money for their owners by making deliveries or driving people around.

    “The self-driving car then, calculating it has approximately 3.5 hours before it will be required by one of its owners again, logs in to Uber and makes itself available for a 3-hour block as a self-driving resource. It is immediately called out to a pickup, and after 3 hours has earned $180 in fees, which it puts away in its wallet.” -“The Death of Bank Products has been greatly under-exaggerated” by Brett King, an excerpt from the book Augmented: Life in the Smart Lane.

    What happens when AI starts making money, needs its own bank account and credit cards, and needs to be able to make transactions on its own? With artificial intelligence, that future may arrive sooner than you think.

    “It’s theoretically possible to write an autonomous self-funded application on Ethereum that earns money to pay for its own execution, or rather its own existence. It might create value by enabling new kinds of markets, for example. Artificial intelligence might help optimize the value it delivers to ensure its own survival. In this case, it’s not just the network that’s unstoppable, but an autonomous agent operating within the network. Cue Skynet joke.” -“Ethereum: Rise of the World Computer” by Rick Seeger.

    Skynet may not be a joke. The United States military is increasingly looking into how artificial intelligence can help supplement drones and fighter jets on the battlefield.

    “One new project … is called Avatar, and calls for the Pentagon to pair high-tech ‘fifth-generation’ fighter jets like the F-22 Raptor and F-35 Joint Strike Fighter with unmanned versions of older jets like the F-16 Fighting Falcon or F/A-18 Hornet, which would be flown without a pilot for the first time. The Avatar effort was previously called Skyborg by SCO and is known as ‘the Loyal Wingman’ concept in the Air Force, Roper said. The program will require unmanned fighters to act with enough autonomy that the pilot in the manned jet doesn’t have to direct them all the time.” –Dan Lamothe

    Why are Smart People So Afraid of AI?

    You may have read that Elon Musk is wary of an AGI becoming an artificial superintelligence, which, according to Nick Bostrom, could happen minutes to weeks after AGI is established. Musk has read Bostrom’s work, and it’s part of why he helped create OpenAI, a collaborative effort to create an open-source AI.

    The hope with OpenAI is to reduce the risk that one person, company, or country could create and control an AI that could cause unintended (or intended) negative consequences. But what types of negative consequences?

    The Paperclip Maximizer

    The “paperclip maximizer” is a thought experiment showing how an artificial general intelligence, even one designed competently and without malice, could ultimately destroy humanity. The thought experiment shows that AIs with apparently innocuous values could pose an existential threat.

    An extremely powerful optimizer (a highly intelligent agent) could seek goals that are completely alien to ours (as humans), and as a side-effect destroy us by consuming resources essential to our survival. I thought it would have been better to use red staplers, but that’s just me. 🙂

    The Rise of Artificial Intelligence

    When I was 14, I called to get a ‘free’ magazine subscription in the mail. The operator asked what I was interested in, and I said ‘artificial intelligence.’ Immediately, an unknown man on the line burst out laughing, but quickly contained himself. No one is laughing now.