Tag: Information Systems

  • What Happens in Your Brain When You Think a Thought?

    A Deep Dive Into Neural Processing

    Have you ever wondered what actually happens in your brain when you think a thought, hear a sound, or recall a memory? We throw around phrases like “processing information” or “making a decision,” but under the hood, your brain is orchestrating a staggeringly complex dance of electrical impulses, chemical signals, and synchronized network activity.

    This post breaks down that process into six major stages—tracking how a sound becomes a conscious thought and how that thought is shaped by memory, emotion, and attention.

    This system of distributed, dynamic thought processing in the brain—where concepts are not static but reenacted through multiple sensory and associative systems—directly supports the foundation of innovation. It’s not just about storing ideas but being able to recombine them in novel ways. This is the cognitive groundwork behind what Cal Newport, drawing from Steven Johnson and Stuart Kauffman, calls the adjacent possible. Just as a thought in the brain forms by connecting distant neural assemblies, so too do groundbreaking ideas form by linking once-unrelated concepts at the edges of our understanding. When your brain lights up in multiple regions simultaneously, it’s not noise—it’s the birth of potential. To see how this plays out at the frontier of innovation, intelligence, and idea synthesis, read this post on The Adjacent Possible, Intelligence, and the Logic of Innovation.

    Synapse-Level Mechanics: Where Thought Begins

    At the most fundamental level, thoughts are patterns of electrical activity between neurons. These spikes, called action potentials, travel down axons and arrive at synapses, where they trigger the release of neurotransmitters. These chemicals cross the synaptic gap and influence whether the next neuron fires or stays silent.

    Repeated activation strengthens these synaptic connections (long-term potentiation), while disuse weakens them (long-term depression). These dynamic links form the “cell assemblies” that encode everything from your name to your favorite song.
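
    To make the idea of strengthening and weakening connections concrete, here is a minimal toy model (an illustrative sketch only, not a simulation of real neurons; the update rule and constants are invented for the example) in which a single synaptic weight grows when two neurons repeatedly fire together and decays when the connection goes unused.

    ```python
    # Toy model of long-term potentiation/depression (illustrative only).
    def update_weight(weight, pre_fired, post_fired,
                      potentiation=0.05, depression=0.01):
        """Return the new synaptic weight after one time step."""
        if pre_fired and post_fired:
            weight += potentiation          # co-activation strengthens (LTP)
        else:
            weight -= depression            # disuse weakens (LTD)
        return max(0.0, min(1.0, weight))   # keep the weight in [0, 1]

    weight = 0.5
    for _ in range(10):                     # rehearse: both neurons fire together
        weight = update_weight(weight, pre_fired=True, post_fired=True)
    print(f"after rehearsal: {weight:.2f}")  # 1.00 -> stronger connection

    for _ in range(10):                     # then the pattern goes unused
        weight = update_weight(weight, pre_fired=False, post_fired=False)
    print(f"after disuse:    {weight:.2f}")  # 0.90 -> connection weakens again
    ```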

    The way your brain handles thoughts—through distributed networks, sensory reenactments, and dynamic integration—forms the biological basis of heuristic thinking. Heuristics aren’t just mental shortcuts; they’re efficiency strategies that reflect how the brain actually operates when navigating complexity. Instead of computing every possibility, your brain selects the most salient patterns, prioritizes action, and leans on past associations—all through rapid-fire synaptic processing. Whether you’re using trial and error, working backward, or relying on a gut instinct, you’re leveraging the same systems that enable neurons to fire in coordinated patterns to create meaning from ambiguity. For a breakdown of practical, proven heuristic strategies your brain is likely already using, check out this post on heuristic thinking and mental models.

    From Air Vibrations to Cortical Code: Tagging the Sound with Meaning

    Take sound, for example. Vibrations in the air reach your ears and stimulate hair cells in the cochlea, converting physical motion into nerve impulses. These signals travel up to the brainstem, then to the thalamus, and finally to the primary auditory cortex (A1), where they’re mapped based on frequency. Your brain essentially builds a 2D “sound map” using spatially-organized neurons.
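
    As a loose software analogy (only an analogy, not a model of the cochlea; the tones and sample rate below are made up for the example), a Fourier transform does something comparable: it takes a pressure wave sampled over time and re-expresses it as energy per frequency, which is the kind of frequency-organized representation the auditory system builds.

    ```python
    # Loose analogy: decompose a sound wave into its frequency components,
    # much as the auditory system organizes incoming sound by frequency.
    import numpy as np

    sample_rate = 8000                                 # samples per second
    t = np.arange(0, 1.0, 1 / sample_rate)             # one second of "sound"
    # A signal containing a 440 Hz tone plus a quieter 1000 Hz tone.
    signal = np.sin(2 * np.pi * 440 * t) + 0.3 * np.sin(2 * np.pi * 1000 * t)

    spectrum = np.abs(np.fft.rfft(signal))             # energy at each frequency
    freqs = np.fft.rfftfreq(len(signal), 1 / sample_rate)

    # The two strongest frequency bins correspond to the two tones in the signal.
    loudest = sorted(float(f) for f in freqs[np.argsort(spectrum)[-2:]])
    print(loudest)                                     # [440.0, 1000.0]
    ```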

    Once the auditory signal reaches the cortex, the hippocampus jumps in. This region acts like an index system—it links the new sensory input with existing memory, emotion, and knowledge stored in other brain regions. If you hear a song from your childhood, it’s the hippocampus that ties the music to your memories, emotions, and even the smell of your childhood home.

    These associations are built by co-activating neuron assemblies and binding them through synaptic plasticity. When one part of the network lights up later, the rest can be reactivated too—this is how recall works.

    Why Multiple Regions Light Up at Once

    A thought rarely involves just one part of the brain. Your brain uses synchronized oscillations—bursts of activity in specific frequency ranges (like gamma waves)—to let distant brain regions communicate. It’s a bit like tuning into the same radio frequency across the cortex.

    This synchronization allows visual areas, auditory regions, emotional centers, and memory stores to coordinate, building a unified experience out of fragmented data.

    Selecting What Matters: The Gatekeeper System

    With so much happening at once, how does the brain choose what becomes conscious thought?

    That job falls to a trio: the prefrontal cortex, basal ganglia, and thalamus. These areas filter and prioritize neural activity, amplifying what’s important and suppressing the rest. Dopamine plays a key role here, signaling what’s rewarding, novel, or worth your attention.

    The “winning” signals are broadcast across the global workspace—a network that’s believed to be the basis of conscious awareness. Competing thoughts or sensory data are suppressed. Only one signal gets through at a time.

    Weaving It Into a Narrative

    Once a thought makes it into the workspace, it doesn’t just float in isolation. The default mode network (DMN), which is active during introspection and daydreaming, steps in to stitch that thought into your ongoing mental narrative. This is where meaning, self-relevance, and memory encoding take place.

    The DMN ensures that your thoughts are not just abstract data but part of a coherent story—you.

    The neural mechanics behind thought formation—where cues activate sensory networks, processes unfold across distributed circuits, and outcomes are reinforced—map almost perfectly to what Charles Duhigg calls the habit loop: cue, routine, reward. At the synaptic level, your brain favors efficiency. Repeated patterns become default pathways not because they’re optimal, but because they’re fast and familiar—this is the essence of habit formation. What Duhigg describes on the organizational level as “keystone habits” mirrors how the brain streamlines complex activity into automatic routines. In both cases, it’s not about removing the habit (or network), but rewiring the middle: swapping in a new routine while keeping the same cue and reward. To explore how this applies to individuals, companies, and culture, check out this post on The Power of Habit and organizational behavior.

    Just as your brain coordinates countless inputs across multiple networks to form a single thought, organizations also rely on complex information systems to collect, process, and synthesize data into actionable insight. In both cases, the value lies not only in storing information, but in how that information is retrieved, prioritized, and made meaningful within a broader context. The brain’s “global workspace” mirrors what a well-structured Management Information System (MIS) does for a business: integrating disparate data sources, filtering noise from signal, and surfacing the most relevant information for decision-making. To explore how this same principle scales up from neurons to networks, read this related post on Information Systems and MIS and how they function as the cognitive backbone of modern enterprises.

    Final Thoughts

    A single thought is not a “thing” stored somewhere in the brain. It’s a dynamic event—a flash of electrical and chemical activity spread across a network, shaped by memory, filtered by attention, and bound together through rhythms. It’s messy, beautiful, and efficient. And it’s happening thousands of times a second.

    If you want to shape what you think and remember, focus on attention, repetition, and meaning. Your brain will do the rest.

  • The Adjacent Possible

    In Cal Newport’s book, So Good They Can’t Ignore You: Why Skills Trump Passion in the Quest for Work You Love, which Haden’s article is about, Cal talks about the “adjacent possible”, which is:

    A term taken from the science writer Steven Johnson, who took it from Stuart Kauffman, that helps explain the origins of innovation. Johnson notes that the next big ideas in any field are typically found right beyond the current cutting edge, in the adjacent space that contains the possible new combinations of existing ideas. The key observation is that you have to get to the cutting edge of a field before its adjacent possible – and the innovations it contains – becomes visible.

    I felt this book was a good example of that for me because I was just about to write something similar. It seems this is possible because Cal and I both have similar reading habits and a desire to find out how to do what we love. This book builds on principles from Seth Godin, Malcolm Gladwell, Derek Sivers, Daniel Pink, and Reid Hoffman. I will admit that I was a believer in the “passion mindset” and although I thought I was a hard worker, I tended to avoid the mental strain Cal talks about that’s so essential to deliberate practice and to building career capital (two terms Cal introduces). This book really does a good job of turning the passion mindset on its head while giving you solid, practical advice about how to get the things you want in a job: control/autonomy. The bad news is that it takes a long time, will hurt, and requires a lot of work.

    Strategy games illustrate the same idea. In Civilization, the tech tree only lets you research technologies adjacent to the ones you already have, and in XCOM: UFO Defense, new research topics only open up once you have recovered and studied alien artifacts. In both games, the next discovery is only visible from the current edge of what you know.

    I’m on the cusp of formulating a new way to think about intelligence

    I’m thinking about this in terms of a presentation, rather than a blog post, but the general idea is that one way to measure intelligence is a person or system’s ability to cross-reference ideas.

    Logic Puzzles

    When you were a child, you may have been asked to fill out simple logic puzzles in math class. They were simple grids of rows and columns, accompanied by a few clue sentences, which you had to fill in using logic.

    Imagine if all ideas in the world occupied both all of the columns and all of the rows in a giant logic puzzle and the more ideas are learned, the more columns and rows are added to this puzzle. It is from this idea that I present the following stories.

    The Rosetta Stone

    Although the Rosetta Stone was discovered in 1799 by Napoleon’s troops in Egypt, it wasn’t until 1822 that a man named Champollion was able to decode the Ancient Egyptian hieroglyphs at the top of the stone, with the help of the Demotic script in its middle portion and the Ancient Greek below.

    He was only able to do this by cross-referencing not just the Ancient Greek, but also Ancient Coptic and other hieroglyphs found at that time. It was these cross-language connections that ultimately helped decrypt the language.

    The First Web App

    In 1995, Paul Graham, who would later co-found Y Combinator, wanted to write an e-commerce application, but didn’t want to write it for Windows. After seeing an advertisement for Netscape, he had the idea of trying to run his Unix application in a browser.

    In order for Graham to create Viaweb as the first web app, Netscape, the World Wide Web, the Internet, and Unix all had to be in place first. It was these technologies that allowed The Adjacent Possible to occur.

    The Adjacent Possible

    Innovations in any field are typically found right beyond the current cutting edge, in the adjacent space that contains the possible new combinations of existing ideas. The cutting edge has to exist in a field before its adjacent possible – and the innovations it contains – becomes visible.

    This explains why discoveries like oxygen or DNA tend to happen at the same time in different parts of the world: the tools needed to make them have only just become available. However, the existence of the technology is not enough; the person still has to have the intelligence to connect the pieces together.

    The Wright Brothers

    By the time The Wright Brothers started working on powered flight, “flight” by gliders was already a thing. The problem was not ‘lift’ – the mechanics of that were known. The problem was in maintaining flight and controlling the aircraft.

    Because The Wright Brothers were avid tinkerers and ran a bicycle shop, they were able to apply ideas from how a bicycle maneuvers through space, along with recent developments in lightweight aluminum engines, to solve the problem of powered flight.

    What is Intelligence?

    The ability to cross-reference ideas requires both the knowledge of the ideas and the ability to recall and compare those ideas to each other. The ability to do this as a human generally requires expert domain knowledge or the cross-pollination of ideas across domains in a more holistic view, but some level of depth is required in at least one domain.

    Compare this to a computer program that could be programmed to compare ideas at a massive scale. Every time a new paper is published or a new gadget is created, the ‘rows’ and ‘columns’ get bigger and every previous idea can now be compared. The results from a computer program doing this task in a holistic way may result in ideas and outcomes that a human would never come to on their own.
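
    As a minimal sketch of that idea (the ideas, their key terms, and the scoring below are hypothetical placeholders), such a program could represent each idea by a set of terms and score every pair by overlap, so that each newly added idea is automatically cross-referenced against everything recorded before it:

    ```python
    # Hypothetical sketch: score every pair of "ideas" by the terms they share,
    # so each new idea is automatically cross-referenced against all earlier ones.
    from itertools import combinations

    ideas = {
        "gliders":        {"lift", "wings", "control", "air"},
        "bicycles":       {"balance", "steering", "control", "wheels"},
        "light engines":  {"aluminum", "power", "weight"},
        "powered flight": {"lift", "power", "control", "steering"},
    }

    def overlap(a, b):
        """Jaccard similarity: shared terms divided by total distinct terms."""
        return len(a & b) / len(a | b)

    # Adding a new idea adds a "row" and "column": every earlier idea gets compared.
    pairs = sorted(
        ((overlap(ideas[x], ideas[y]), x, y) for x, y in combinations(ideas, 2)),
        reverse=True,
    )
    for score, x, y in pairs[:3]:
        print(f"{x} <-> {y}: {score:.2f}")
    ```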

    Lawrence Barsalou, PhD, Emory University (lecture uploaded Apr 14, 2008): The human conceptual system contains categorical knowledge that supports online processing (perception, categorization, inference, action) and offline processing (memory, language, thought). Semantic memory, the dominant theory of the conceptual system, typically portrays it as modular, amodal, abstractive, and static. Alternatively, the conceptual system can be viewed as non-modular, modal, situated, and dynamic. According to this latter perspective, the conceptual system is non-modular and modal because it shares representational mechanisms with modality-specific systems in the brain, such as vision, action, and emotion. On a given occasion, modality-specific information about a category’s members is reenacted in relevant modality-specific systems to represent it conceptually. Furthermore, these simulations are situated, preparing the conceptualizer for situated action with the category. Not only do these situated simulations represent the target category, they also represent background settings, actions, and mental states, thereby placing the conceptualizer in the simulation, prepared for goal pursuit. Because the optimal conceptualization of a category varies across different courses of situated action, category representations vary dynamically and are not static. Furthermore, different situations engage different neural systems dynamically when representing a category. Under some circumstances, the linguistic system plays a more central role than simulation, whereas under other circumstances, simulation is more central. Thus, the concept for a category appears to be a widely distributed circuit in the brain that includes modality-specific and linguistic representations, integrated by association areas. Across situations, these circuits become realized dynamically in diverse forms to provide the knowledge needed for cognitive processing. Behavioral and neural evidence is presented to support this view.

    If you like this, you might like How to Work a Life of Purpose.

  • The Fragility of Information

    How stable is the state of information storage on the planet today? How much do you know about the life of your great grandfather? How much of your own past would you remember if you lost all access to your personal data? And how likely is it that all of the world’s information could be lost?

    Information is inherently fragile.

    In the recent past we printed out paper copies or burned CDs or DVDs to back up information stored online or on computers. But now the trend is to digitize as much information as possible and go ‘paperless’. We make ourselves feel better by creating ‘backups’ and making our systems ‘redundant’, but how stable are these information systems, really?

    If, for example, one file was damaged via corruption, every copy made thereafter would be a copy of the corrupted file. When you went to restore the file from a burned CD, you might find that the tiny ‘pits’ on the CD have decayed and the CD is no longer readable.

    Or when you went to access the information, you found that it was stored on a medium that is no longer accessible (such as a floppy disk, which is only readable with a floppy drive) or in a file format that requires a program or operating system that no longer exists.

    While the latter type, what I’ve previously called The White Album Problem, can be overcome through a constant and persistent ‘copy, transfer, and upgrade’ cycle, that cycle doesn’t account for the former type, bad data, and it certainly doesn’t overcome the Worst Possible Outcome.
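
    The bad-data problem at least has a partial defense: record a checksum when a file is first archived and verify it before making any further copies, so a corrupted file is caught instead of silently propagated. Here is a minimal sketch using SHA-256 (the file names are hypothetical placeholders):

    ```python
    # Minimal sketch: record a checksum when a file is archived, then verify it
    # before making further copies, so corruption is caught instead of propagated.
    import hashlib
    import shutil
    from pathlib import Path

    def sha256_of(path):
        """Return the SHA-256 hex digest of a file's contents."""
        return hashlib.sha256(Path(path).read_bytes()).hexdigest()

    def copy_if_intact(src, dst, expected_digest):
        """Copy src to dst only if its checksum still matches the recorded one."""
        if sha256_of(src) != expected_digest:
            raise ValueError(f"{src} no longer matches its recorded checksum")
        shutil.copy2(src, dst)

    # Hypothetical usage: 'photo.jpg' and its digest would come from your archive.
    # digest = sha256_of("photo.jpg")                  # recorded at first backup
    # copy_if_intact("photo.jpg", "backup/photo.jpg", digest)
    ```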

    The Worst Possible Outcome

    The Worst Possible Outcome is that we, as a society, have digitized all of the world’s information and stored it on electronic information systems that run on electricity. There are no paper copies of any information, but it is all available at our fingertips. Humanity rejoices!

    But then one day a freak solar flare from the sun bathes the earth in electromagnetic radiation, destroying all electronics, and plunging us into total darkness. There are no paper books, no paper maps, and no paper manuals on how to put Humpty Dumpty back together again.

    “What has been is what will be, and what has been done is what will be done, and there is nothing new under the sun. Is there a thing of which it is said, ‘See, this is new’? It has been already in the ages before us. There is no remembrance of former things, nor will there be any remembrance of later things yet to be among those who come after.” – Ecclesiastes 1:9-11

    How many times has near total information loss happened before? Let’s look at some modern examples of information loss. At NASA they forgot how the Saturn V rocket worked and are now trying to figure out how to make it work again:

    “A team of engineers at NASA’s Marshall Spaceflight Centre in Huntsville, Alabama, are now dissecting the old engines to learn their secrets…Testing pieces of a rocket that hasn’t fired in nearly 50 years isn’t easy. They aren’t exactly lying around shops in test condition, they’re in storage units and museums.”

    But the knowledge loss wasn’t limited to the rockets. After the Apollo program was ended, the crawlers used to move the Saturn V rockets were set aside and those who built them moved on to other projects. When the Space Shuttle project was first growing, NASA had to spend large amounts of money putting the crawlers back together, because the technology had practically been lost.

    In the May/June 1973 issue of Saudi Aramco World, Richard W. Bulliet wrote in “Why They Lost The Wheel”, “Eastern society wilfully abandoned the use of the wheel, one of mankind’s greatest inventions,” opting instead to use camels, which were more suited for travel than the horses and chariots used earlier in Egypt and Rome.

    There are other examples of lost technology such as The Antikythera Mechanism, “Discovered in a shipwreck in 1900, this device was built around 150-100 BC with levels of miniaturization and mechanical complexity that weren’t replicated until around 1500 years later. After much speculation, in 2008 scientists determined that it tracked the Metonic calendar, predicted solar eclipses, and calculated the timing of the Ancient Olympic Games.”

    Damascus Steel swords, which were generally made in the Middle East anywhere from 540 A.D. to 1800 A.D., were sharper, more flexible, and harder/stronger than other contemporary blades. They were also visually different, having a marbling pattern called “damask” that hinted at a special technique/alloy. But production gradually stopped over the years, and the highly-guarded technique was lost – no modern smiths or metallurgists have been able to definitively reproduce the techniques/alloys used in forging those swords.

    John Ochsendorf, the architectural rebel who champions ancient engineers, recently wrote that “Old masonry buildings are stressed very low, and so the fundamental issue is that we had knowledge accumulated over centuries, or even millennia, which with the Industrial Revolution was essentially thrown out and we don’t really build like that anymore. Engineers are taught today in universities that there are really two dominant materials—steel and concrete—and so when they come to an old structure, too often we’re trying to make old structures conform to the theories that we learned for steel and concrete; whereas it’s more useful generally to think of them as problems of stability and geometry, because the stresses in these monuments are very, very low. At root, the fundamental issue is that we’ve lost centuries of knowledge, which has been replaced by other knowledge about how to build in steel and concrete. But today’s knowledge doesn’t necessarily map easily onto those older structures. And if we try to make them conform to our theories, it’s very easy to say that these older structures don’t work. It’s a curious concept for an engineer to come along to a building that’s been standing for 500 years and to say this building is not safe.”

    How does information get lost?

    Information is inherently fragile. There are many ways it can be lost: from data corruption to fire, war, flooding, and electromagnetic pulse, to the simple act of forgetting to record the information in the first place. That last one is the most common and most dangerous of them all.

    At NASA (and other large [and small] organizations) there is a groupthink, shared-brain mentality where the corporate knowledge of the organization is enough to get by for finite periods of time. Organizations can operate this way as long as there is not too much turnover or brain drain.

    But what happens when a large portion of the workforce retires at the same time, or a region suffers a local catastrophe, or information is not thought to be needed now, but may be very important later? What happens when the information is never recorded in the first place?

    Have you ever heard the term “Recorded History”? Have you ever wondered why there seems to be a time in history when there is no written record of any events before that time? What could have happened to prevent knowledge transfer?

    It could be false pride that is leading us to believe that we are the only generation of humans to get to this point in technological evolution, or it could be that previous generations digitized their information to the point where no historical evidence of them now exists.

    If someone a hundred years from now was tasked with proving you existed, what information would they use to prove that point? Would they rely on government databases? An abandoned Facebook account? What if there were no computers? No Internet access? How then?

    The reason we know even what we know now about Jesus, famous leaders, and former Presidents is that people wrote this information down on paper and someone copied it. When computers came along, the information was digitized and copied further, but if we stop copying, the information stops.

    When the Bible was being copied by ancient scholars, there were error-correction measures put in place to ensure that verses were copied exactly as intended. What error-correcting mechanisms do we have for modern-day digital photographs, documents, and other digital information?
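
    A crude digital cousin of that practice (an illustrative sketch, not a technique described in this post) is to keep several independent copies and reconcile them byte by byte by majority vote, so that a single corrupted copy can be detected and repaired:

    ```python
    # Illustrative sketch: keep three independent copies and reconcile them
    # byte by byte by majority vote, so a single corrupted copy can be repaired.
    from collections import Counter

    def reconcile(copies):
        """Rebuild the most likely original from several equal-length copies."""
        assert len({len(c) for c in copies}) == 1, "copies must be the same length"
        repaired = bytearray()
        for position in range(len(copies[0])):
            votes = Counter(copy[position] for copy in copies)
            repaired.append(votes.most_common(1)[0][0])   # the majority byte wins
        return bytes(repaired)

    original = b"Information is inherently fragile."
    damaged = bytearray(original)
    damaged[3] = ord("X")                            # one copy picks up an error
    print(reconcile([original, bytes(damaged), original]))
    # b'Information is inherently fragile.'
    ```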

    What are we supposed to do?

    There are three things that have worked in the past:

    1. Copying information to new formats using error-correcting mechanisms
    2. Varying the ways in which information is stored across mediums
    3. Re-translating information as languages grow, change, and die out

    Here are some practical examples, using this blog as the example:

    1. WordPress may not last forever, so at some point I may have to switch to a new platform and transfer and translate my data into the new format
    2. A catastrophe could take out the servers that host this information, or we could lose the ability to read it, so it could also be printed on paper
    3. If the dominant language morphs from English into some new language, then the blog would need to be translated over time

    In general, things people care about the most are printed, backed up, copied, and distributed, but sometimes events conspire to erase even the most important information. Information is inherently fragile and we must always be vigilant to keep that which is most dear to our hearts.