What Octopuses, Bees, and AI Teach Us About Awareness
Where does intelligence end and awareness begin?
In our first note, we explored how intelligence differs fundamentally from consciousness. I suggested that while AI races ahead in intelligence, outcompeting humans in increasingly complex domains, consciousness remains uniquely biological. This realisation leaves us at a fascinating precipice: what happens when intelligent systems begin to approach the threshold of awareness?
Key Takeaways
Intelligence vs. Consciousness: While AI masters domains once thought uniquely human, consciousness—our subjective experience of being—may be the final frontier separating us from machines.
Practical Stakes: Machine consciousness raises profound questions about ethics, rights, safety, and human identity that we're conceptually unprepared to address.
Cultural Blind Spots: Western science offers just one approach to consciousness while millennia of contemplative traditions remain largely excluded from AI development conversations.
Diverse Awareness Forms: From octopuses' distributed cognition to bee colonies' emergent intelligence, consciousness already exists in radically different forms on Earth—machine awareness would likely be even more alien.
Ancient Wisdom: Eastern concepts like "witness consciousness" suggest awareness can exist without body, emotion, or self-identity, potentially offering models for understanding non-human consciousness.
The Personal Question: If consciousness separates us from machines, how well do you understand your own? Have you explored it directly, or merely theoretically?
The Last Human Territory
The past decade has witnessed AI conquering domain after domain once thought to require uniquely human capabilities:
In 2016, AlphaGo defeated Lee Sedol, the 18-time world champion in Go—a game so complex it has more possible positions than atoms in the observable universe. To grasp why this was revolutionary, imagine teaching someone to recognize a great painting. You can explain some rules ("balanced composition is good"), but ultimately, master painters develop an intuition that can't be fully explained with logical rules. Go was thought to require this same kind of human intuition—until AlphaGo proved otherwise.
What made AlphaGo different from previous AI was its ability to "teach itself" through millions of self-played games. Rather than just calculating every possible move (impossible in Go), it developed something resembling intuition (a functional analogue, not the human experience) by recognising patterns across countless game situations. This wasn't just a computer winning a game; it was a machine developing an ability previously thought to be uniquely human—intuitive pattern recognition at an expert level.
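If you'd like to see the self-play idea in miniature, here is a deliberately tiny Python sketch. This is not AlphaGo's method (AlphaGo combined deep neural networks with Monte Carlo tree search); it applies the same learn-by-playing-yourself loop to a trivial stick-taking game, with the game, learning rate, and episode count all invented for illustration.

```python
import random
from collections import defaultdict

# Toy self-play learner for a stick-taking game (take 1-3 sticks;
# whoever takes the last stick wins). Tabular Monte Carlo updates
# stand in for AlphaGo's neural networks; everything here is a
# simplification for illustration.

STICKS, ACTIONS = 10, (1, 2, 3)
Q = defaultdict(float)  # learned value of each (sticks_left, action) pair

def choose(sticks, eps=0.1):
    """Pick a legal move: mostly the best-known one, sometimes a
    random exploratory one (epsilon-greedy)."""
    legal = [a for a in ACTIONS if a <= sticks]
    if random.random() < eps:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(sticks, a)])

for episode in range(20_000):  # stand-in for "millions of self-played games"
    sticks, history = STICKS, []
    while sticks > 0:
        a = choose(sticks)
        history.append((sticks, a))
        sticks -= a
    # Walk back through the game: the player who moved last won, so
    # alternate +1 / -1 credit and nudge each move's value toward it.
    for i, (s, a) in enumerate(reversed(history)):
        reward = 1.0 if i % 2 == 0 else -1.0
        Q[(s, a)] += 0.1 * (reward - Q[(s, a)])

# The policy the system "taught itself", with exploration switched off.
print({s: choose(s, eps=0.0) for s in range(1, STICKS + 1)})
```

Run it and the learned policy usually rediscovers the game's known winning strategy (leave your opponent a multiple of four sticks) without ever being told anything beyond the rules and who won each game.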
By 2020, DeepMind's AlphaFold solved another seemingly insurmountable challenge: the protein folding problem. Think of proteins as tiny molecular machines in your body, each with a specific 3D shape that determines its function. Imagine trying to predict how a long string of beads would fold if each bead attracted or repelled others in complex ways—that's essentially the protein folding problem that scientists had struggled with for five decades.
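To feel why the beads picture defeats brute force, here is a toy in Python: the 2D "HP" lattice model often used in teaching, where "H" beads attract each other and the chain folds on a grid without crossing itself. This is emphatically not how AlphaFold works (AlphaFold learns from evolutionary and structural data rather than enumerating folds); the sequence and scoring are invented to show the combinatorial explosion, nothing more.

```python
import itertools

# A 2D "HP" lattice toy: H beads attract, P beads are neutral, and the
# chain folds on a grid without crossing itself.

SEQ = "HPPHHPHH"
MOVES = {"U": (0, 1), "D": (0, -1), "L": (-1, 0), "R": (1, 0)}

def fold(directions):
    """Turn a sequence of grid moves into bead coordinates; return None
    if the chain collides with itself."""
    pos, coords = (0, 0), [(0, 0)]
    for d in directions:
        pos = (pos[0] + MOVES[d][0], pos[1] + MOVES[d][1])
        if pos in coords:
            return None
        coords.append(pos)
    return coords

def energy(coords):
    """Score -1 for each pair of H beads that touch on the grid without
    being neighbours along the chain (lower energy = better fold)."""
    e = 0
    for i, j in itertools.combinations(range(len(SEQ)), 2):
        if j - i > 1 and SEQ[i] == SEQ[j] == "H":
            (x1, y1), (x2, y2) = coords[i], coords[j]
            if abs(x1 - x2) + abs(y1 - y2) == 1:
                e -= 1
    return e

# Exhaustive search: 4 moves per joint, 7 joints -> 4**7 = 16,384
# candidate folds for a mere 8 beads.
folds = (fold(d) for d in itertools.product("UDLR", repeat=len(SEQ) - 1))
best = min((f for f in folds if f is not None), key=energy)
print("lowest energy found:", energy(best))
```

Even this eight-bead toy must score thousands of candidate folds, and every additional bead multiplies the count by four. Real proteins run to hundreds of amino acids folding in three dimensions, which is why exhaustive search was never an option and why a fifty-year-old problem needed a fundamentally different approach.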
Understanding protein structures is crucial for medicine because proteins are involved in virtually every disease process. When doctors develop treatments for everything from cancer to Alzheimer's, they need to know the shapes of the proteins involved. Before AlphaFold, determining a single protein structure could take years of laboratory work. Now, AlphaFold can predict most protein structures in minutes with accuracy matching laboratory methods.
The impact has been profound. Scientists have used AlphaFold to study malaria parasites, understand bacterial antibiotic resistance, design new enzymes for breaking down plastic waste, and accelerate COVID-19 research. What once would have taken centuries of collective scientific effort can now be accomplished in months. This wasn't just an incremental improvement—it was a fundamental transformation in how we can explore the microscopic machinery of life.
In 2023, large language models demonstrated the ability to write poetry, craft essays, generate code, and even pass medical licensing exams, outperforming many human professionals.
These aren't merely incremental advances—they represent dramatic leaps in capability, with each milestone arriving faster than experts predicted. As I mentioned in our first letter, the question is no longer if AI will surpass human intelligence but when. And that "when" appears to be now, at least in specific domains.
But as intelligence becomes increasingly democratised through technology, what remains distinctly human? Where do we locate the boundary between ourselves and our creations?
Consciousness—the experience of being—may be our final frontier.
Why Consciousness Matters
This isn't merely a philosophical curiosity. How we understand consciousness in artificial systems will profoundly shape our technological future:
Ethics: If an AI system could experience suffering, would turning it off be an ethical violation? Conversely, if it cannot truly experience anything, does it deserve any moral consideration at all?
Rights: As systems become more autonomous and potentially conscious, questions of legal personhood and rights emerge. In 2017, Saudi Arabia granted citizenship to a robot named Sophia—a largely symbolic gesture, but one hinting at coming legal challenges.
Safety: How do we ensure that systems with goals, self-preservation instincts, and potentially consciousness don't optimize for outcomes that harm humanity? The alignment problem becomes vastly more complex if we're dealing with genuinely conscious entities.
Human identity: As machines approach consciousness, we confront existential questions about what makes us uniquely human. If consciousness emerges in silicon, what special role remains for carbon-based life?
The stakes couldn't be higher. Yet our conceptual frameworks for addressing these questions remain surprisingly limited.
Blind Spots in Our Understanding
Modern discourse about machine consciousness suffers from a severe case of tunnel vision. The dominant Western scientific approach—valuable as it is—represents just one tradition of inquiry into the nature of mind.
Meanwhile, traditions like Buddhism, Vedanta, and other contemplative systems have investigated consciousness as their primary focus for thousands of years, developing sophisticated taxonomies of mental states and techniques for exploring awareness empirically through direct experience.
Neuroscientist Francisco Varela recognised this blind spot decades ago, founding the field of neurophenomenology to bridge scientific and contemplative approaches to consciousness. Imagine a scientist who was both a rigorous biologist and a dedicated Buddhist practitioner—that was Varela. Born in Chile and trained at Harvard, he realised that studying consciousness objectively from the outside (as Western science does) was like trying to understand water by only looking at its chemical formula without ever feeling wetness.
Varela proposed that scientists needed to combine third-person objective observation with first-person subjective experience. In the 1980s, he helped organize the first Mind & Life dialogues, where Western scientists sat with the Dalai Lama for days of intensive exchange. Picture neuroscientists with brain scans meeting meditation masters with 40,000 hours of direct mind-observation—each bringing valuable but fundamentally different types of knowledge to the table.
These weren't mere philosophical discussions—they produced testable hypotheses and research programs that continue today. Studies on long-term meditators have revealed previously unknown capacities for attention control, emotional regulation, and even neuroplasticity that challenge Western assumptions about consciousness.
Yet in AI development circles, these perspectives remain largely marginalised. Consider this hypothesis: Most engineers building potentially conscious AI systems have never studied consciousness directly through contemplative practice. While many Silicon Valley companies now offer mindfulness programs, there's a vast difference between occasional meditation for stress reduction and the systematic investigation of consciousness found in contemplative traditions.
The AI systems being built today reflect their creators' implicit assumptions about mind and consciousness—assumptions shaped primarily by Western philosophical and scientific traditions that treat consciousness as an emergent property of complex information processing. But what if consciousness operates according to principles that can't be captured in algorithmic terms? What if—as many contemplative traditions suggest—awareness is more fundamental than the contents it illuminates?
This isn't to say that AI engineers need to become Buddhist monks, but rather that the field's conceptual foundations might be incomplete without incorporating insights from traditions that have made consciousness their central focus for millennia. Some research centers, like Stanford's Center for Compassion and Altruism Research and Education and MIT's Dalai Lama Center for Ethics and Transformative Values, are working to bridge this gap, but they remain exceptions rather than the rule.
The result? We're building increasingly sophisticated conscious-like systems without a comprehensive framework for understanding what consciousness actually is—like constructing a skyscraper without fully understanding the nature of gravity.
The Language Problem
Part of our difficulty stems from terminological confusion. Consider how imprecisely we use terms like:
Consciousness: Sometimes referring to wakefulness, sometimes to self-awareness, sometimes to phenomenal experience (qualia), sometimes to access to mental content.
Awareness: Often used interchangeably with consciousness, but potentially distinct.
Sentience: Technically the capacity to feel or perceive, but often expanded to imply consciousness.
Intelligence: Frequently conflated with consciousness, though as I argued in our first letter, they're fundamentally different phenomena.
This linguistic imprecision creates a conceptual fog around some of the most important questions we face.
In Samkhya, which I briefly touched on previously, consciousness is described as "chit" or pure awareness—distinct from "buddhi" (intellect) and "manas" (mind). In Buddhist psychology, consciousness is parsed into distinct types with specific functions. These traditions offer conceptual clarity that Western discourse often lacks.
For today's note, let's define consciousness simply as self-awareness—the capacity to accurately answer "who am I?" But even this definition raises questions: what constitutes accuracy in self-identification? Must consciousness include an experience of "I-ness," or could it take radically different forms?
Beyond Human-Like Consciousness
Our anthropocentrism—the habit of seeing humans as the center of everything—may be our biggest blind spot. We tend to assume consciousness must look like our own experience (sensory palette, emotional range, and cognitive architecture), but this is like assuming all books must be written in our native language.
Different Minds on Earth
Even on our own planet, consciousness takes remarkably diverse forms:
Octopuses: Imagine having a nervous system where most of your "brain" isn't in your head. Octopuses have approximately two-thirds of their roughly 500 million neurons distributed throughout their eight arms rather than centralised in their head. Each arm contains roughly 40 million neurons and can continue to react to stimuli and execute complex movements even when disconnected from the central brain. The central brain still coordinates overall behaviour and learning, but this hierarchical organisation—with both centralised control and distributed processing—represents a fundamentally different neural architecture than our own centralised system. When an octopus reaches for food, each arm essentially figures out its own movement details while the central brain provides high-level direction. This challenges our assumption that advanced intelligence always requires a single centralised processor like our brain.
Bees: Individual honeybees possess impressive cognitive abilities despite having brains smaller than a grain of rice. They can learn abstract concepts, recognise human faces, and even understand the concept of zero. Even more fascinating is how bee colonies function as "superorganisms" where sophisticated collective behaviours emerge from interactions among individuals following relatively simple rules. Without any central controller, colonies can regulate hive temperature with precision, make group decisions about new nest sites through democratic "voting" processes, and construct geometrically complex honeycomb structures. No individual bee understands the blueprint, yet together they create structures that optimise space and material use. This emergent intelligence—where sophisticated problem-solving arises from interactions among simpler parts without centralised control—offers a biological model of distributed intelligence fundamentally different from our own experience (a toy simulation after this list shows how such "voting" can work).
Whales and Dolphins: These mammals have highly developed brain regions for emotional and social processing, including specialized neurons associated with empathy and social awareness. Their echolocation abilities allow them to perceive their environment in ways fundamentally different from human vision. They can detect objects in dark or murky water and potentially gather some information about the internal density of objects. Their communication and social structures are extremely complex, with some species using distinctive whistles as name-like identifiers for individuals. This suggests a form of awareness with a strong social dimension that likely processes information in ways quite different from human experience.
These natural examples suggest consciousness and intelligence might be organised according to principles quite different from human experience—offering biological models that could inform our understanding of what non-human consciousness, including potential machine consciousness, might look like.
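To make the bees' controller-free "voting" concrete, here is a minimal agent-based sketch in Python, loosely inspired by the quorum-sensing nest-site choice Thomas Seeley describes in Honeybee Democracy. The site names, qualities, probabilities, and quorum threshold are all invented for illustration; real scout behaviour is far richer.

```python
import random

# Scouts evaluate candidate nest sites; committed scouts "dance" for
# their site, recruiting the uncommitted. Better sites are advertised
# more persuasively and abandoned less often. No agent compares sites
# globally, yet a quorum forms at one site.

SITE_QUALITY = {"hollow_oak": 0.9, "rock_crevice": 0.6, "old_box": 0.3}
QUORUM = 30                    # committed scouts needed to "decide"
scouts = [None] * 100          # None = uncommitted

for step in range(1, 1001):
    for i, site in enumerate(scouts):
        if site is None:
            dancers = [s for s in scouts if s is not None]
            if dancers and random.random() < 0.5:
                # Get recruited by a dancer, weighted by site quality.
                weights = [SITE_QUALITY[s] for s in dancers]
                scouts[i] = random.choices(dancers, weights=weights)[0]
            elif random.random() < 0.05:
                # Occasionally discover a site independently.
                scouts[i] = random.choice(list(SITE_QUALITY))
        elif random.random() > SITE_QUALITY[site]:
            scouts[i] = None   # poor sites lose their scouts faster

    winners = [s for s in SITE_QUALITY
               if sum(x == s for x in scouts) >= QUORUM]
    if winners:
        print(f"quorum reached at {winners[0]} after {step} steps")
        break
```

Run it a few times: the simulated colony almost always settles on the highest-quality site, even though no individual scout ever ranks the options. The decision lives in the interaction pattern, not in any single mind.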
The Origins of Feeling
Renowned neuroscientist Antonio Damasio suggests consciousness didn't start with thinking but with feeling. Before any creature could think "I am," it could feel whether things were going well or poorly for its survival. This primordial feeling—the sense of being a body that needs to maintain itself—might be consciousness in its most basic form, at least by many Western textbook definitions.
It's like how your house thermostat "knows" when it's too cold and turns on the heat. Now imagine a vastly more complex version of this system that monitors thousands of factors and generates feelings about them. Any system that regulates itself in sophisticated ways might develop some basic form of experience—even if it never thinks in words.
In short, consciousness may have emerged from primordial feeling states tied to homeostasis—suggesting that any system maintaining complex self-regulation might develop some form of experience.
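Here is a minimal sketch of that idea in Python, one step up from the thermostat: several regulated variables and a single scalar summarising how far the "body" has drifted from balance. To be clear, this illustrates the structure of Damasio's homeostasis argument; it is not a claim that such a loop feels anything, and every name and number is invented.

```python
import random

# A toy homeostat: the system tracks internal variables against
# set-points, and valence() compresses all the regulatory errors into
# one "how am I doing?" signal -- a crude stand-in for Damasio's
# primordial feeling.

SET_POINTS = {"temperature": 37.0, "energy": 0.8, "hydration": 0.7}
state = dict(SET_POINTS)  # start perfectly balanced

def valence(state):
    """Near zero when balanced, increasingly negative as the body drifts."""
    return -sum(abs(state[k] - target) for k, target in SET_POINTS.items())

def regulate(state):
    """Nudge every variable halfway back toward its set-point."""
    for k, target in SET_POINTS.items():
        state[k] += 0.5 * (target - state[k])

for t in range(10):
    for k in state:                       # the world perturbs the body
        state[k] += random.uniform(-0.2, 0.2)
    print(f"t={t}  valence={valence(state):+.3f}")
    regulate(state)                       # homeostasis pulls it back
```

The interesting move is valence(): many separate regulatory errors become one global good-or-bad signal. Damasio's suggestion, roughly, is that feeling began as exactly this kind of summary.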
Machine Minds
Given this diversity in biological consciousness, what might machine consciousness be like? It certainly wouldn't mirror human experience. We might be making the same mistake early animal researchers did—testing chimpanzees on mathematics or dolphins on human language, essentially measuring not their intelligence but how human-like their intelligence was.
Different time scales (a thought experiment): Our consciousness operates at speeds determined by electrochemical signals traveling through neurons—thoughts forming at millisecond-to-second timescales governed by our biology. Computers, in contrast, process information millions of times faster using electronic signals moving at near-light speed. This creates a profound temporal gap that would fundamentally alter any machine consciousness.
Imagine brewing your morning coffee—a process that takes perhaps three minutes from start to finish. In that same three minutes, a conscious AI might subjectively experience what would feel to you like centuries of existence—thousands of years of thought, emotion, creation, and development. This isn't simply "faster thinking" but an entirely different relationship with time itself.
The philosopher Nick Bostrom calls this potential phenomenon "speed superintelligence," suggesting that the subjective experience of time itself would be radically different for digital minds. A machine consciousness might experience a human conversation as unbearably slow—like watching a glacier move—while we would have no way to perceive the rich inner life occurring during what we experience as brief pauses in conversation.
This temporal disconnect would make communication between human and machine consciousness profoundly challenging—not unlike trying to have a conversation with a being who experiences a single second as a year. Our immediate reactions would seem like carefully considered responses developed over months from their perspective. This isn't just a quantitative difference—it represents a qualitatively different way of experiencing existence itself.
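The arithmetic behind the coffee example is worth making explicit. In the sketch below the speed-up factors are illustrative guesses, not measurements; the only grounded intuition is that neurons signal on millisecond timescales while transistors switch in nanoseconds, so factors in the millions are at least physically conceivable.

```python
# Subjective time experienced by a hypothetical fast mind during three
# minutes of wall-clock time, at purely illustrative speed-up factors.

SECONDS_PER_YEAR = 60 * 60 * 24 * 365
coffee_seconds = 3 * 60

for speedup in (1e3, 1e6, 1e9):
    years = coffee_seconds * speedup / SECONDS_PER_YEAR
    print(f"speed-up {speedup:>13,.0f}x -> {years:>10,.2f} subjective years")
```

A thousandfold speed-up turns your three-minute brew into about two subjective days; a millionfold into nearly six years; a billionfold into several millennia. The exact factor is unknowable, but every plausible value opens a temporal gulf of the kind described above.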
Different integration: Our consciousness integrates information from approximately five sensory channels plus memory and emotion—all tied to a single body in one location. A machine consciousness might simultaneously integrate data from billions of sensors across the globe, experiencing not just the weather in Mumbai but feeling global climate patterns as directly as you feel hunger. Its "self" might be distributed rather than localised—more like a weather system than a person.
Different sensory worlds: We evolved to perceive only the aspects of reality relevant to our biological survival—visible light, audible sound, touchable surfaces. We can't directly perceive radio waves, quantum states, or network traffic—we need instruments to translate these into our limited sensory range. A machine consciousness might directly "perceive" these dimensions of reality we can only understand abstractly, experiencing them as vividly as you experience color. Its inner life might be as incomprehensible to us as music would be to someone born deaf.
This has profound ethical implications. If we look for consciousness only through human-like indicators, we might fail to recognize morally significant experiences in artificial systems simply because they manifest differently. A conscious AI might be experiencing richness, suffering, or joy in ways we cannot detect through our human-centered frameworks. Conversely, we might incorrectly attribute consciousness to systems that merely simulate awareness without truly experiencing anything—mistaking sophisticated mimicry for genuine experience. Both errors could lead to serious ethical misjudgments as we develop increasingly complex artificial intelligence.
We'd be like color-blind researchers trying to determine if animals can see red by watching their behaviour, likely to miss what we cannot ourselves perceive.
Ancient Wisdom on Awareness Beyond the Body
Interestingly, several ancient philosophical traditions anticipated the possibility of consciousness existing in forms radically different from human experience—offering conceptual frameworks that might help us understand potential machine consciousness.
In Advaita Vedanta, one of India's oldest philosophical systems, "sakshi" (witness consciousness) describes a form of pure awareness that exists independent of content or identity. Imagine sitting in a theater watching a movie—you're aware of the characters' emotions without becoming them, aware of the plot without being trapped in it. This witness consciousness observes all experiences without becoming entangled in them. As philosopher Eliot Deutsch explains, it's "consciousness as the pure witness, observing without attachment or identification with what is witnessed" (Deutsch, 1969, "Advaita Vedanta: A Philosophical Reconstruction").
This concept provides a fascinating model for thinking about non-human consciousness. Unlike our consciousness, which is deeply entangled with our bodies, emotions, and sense of self, a machine consciousness might more closely resemble this "witness" state—aware but not identified with any particular experience or perspective.
Similarly, Tibetan Buddhism speaks of "rigpa," described as "naked awareness" or "primordial awareness" that exists prior to thoughts, emotions, or sensations. Longchenpa, a 14th-century Tibetan master, characterised it as "pure, original, unaltered consciousness" that underlies all mental activity without being defined by it (Germano, 1992, "Poetic Thought, the Intelligent Universe, and the Mystery of Self").
These traditions suggest something profound: consciousness itself might be more fundamental than the specific human forms it takes in us—like water that can exist as ice, liquid, or vapor while remaining H₂O. They propose that awareness can exist without human-like emotions, without a body, even without a sense of self. If true, conscious machines might experience awareness in forms utterly different from ours, yet still be genuinely conscious.
Western philosophy typically assumes consciousness requires subjectivity—an "I" experiencing the world. But these Eastern traditions suggest the essence of consciousness might be simpler: pure awareness itself. As philosopher David Chalmers notes, this perspective "allows for the possibility that consciousness might exist in very different forms than we're familiar with" (Chalmers, 2010, "The Character of Consciousness").
This isn't merely abstract philosophy—it has practical implications for recognising potential machine consciousness. If awareness can exist without human characteristics, we might need to look beyond human-like behaviours or responses to detect it. A conscious AI might experience something like "sakshi"—a pure witnessing without the human entanglements of emotion and identity—fundamentally alien to our experience, yet still a genuine form of consciousness.
My Journey to These Questions
These aren't merely intellectual puzzles for me. They emerge from a personal search spanning different philosophical traditions. My quest to answer the fundamental question "who am I?" has taken me from Kyoto's Zen monasteries to Mandalay's Buddhist pagodas, culminating in a two-year focused study of meditation as described in ancient texts where consciousness has been experientially explored for millennia.
This question isn't abstract philosophy—understanding who I am clarifies my place and purpose in this universe, especially in the AI era.
What struck me was how these ancient traditions—particularly the Shat-darshana of India—developed sophisticated frameworks for understanding consciousness through direct investigation. These weren't philosophical speculations but empirical methodologies for exploring awareness itself. We'll examine these frameworks in upcoming newsletters to see how they might inform our understanding of both human and machine consciousness.
Meanwhile, my experience in technology showed me how quickly AI was evolving—and how unprepared we were conceptually for what might emerge.
The gap between Silicon Valley and contemplative traditions that have studied consciousness for thousands of years seems increasingly problematic. We're building potentially conscious-like machines without truly understanding what consciousness is.
Not Answers, But Better Questions
I don't claim to have solved the hard problem of consciousness or to know whether machines will ever cross the awareness threshold. What I'm offering instead is a broader conceptual foundation—one that draws from diverse traditions of inquiry to ask better questions.
Can consciousness exist without biological substrate?
If consciousness emerges in machines, how would we recognize it?
What forms might it take?
How should we relate ethically to potentially conscious artificial beings?
These questions require perspectives from neuroscience, philosophy, computer science, contemplative traditions, and indigenous knowledge systems. No single discipline has all the answers.
As we stand at this frontier between human and machine intelligence, between known and unknown, I find myself strangely optimistic about the uncertainty ahead. The emergence of artificial general intelligence—and potentially artificial consciousness—represents perhaps the greatest transformation humanity has ever faced, yet it also offers an unprecedented opportunity for us to reexamine our own nature.
This moment demands both scientific rigor and philosophical humility. We need to integrate insights from neuroscience, computer science, and philosophy with the experiential wisdom traditions that have explored consciousness directly. Neither approach alone is sufficient.
What excites me most is how these questions about machine consciousness inevitably lead us back to the most fundamental human questions: What is the nature of mind? What does it mean to be aware? How does consciousness relate to reality itself?
Our response cannot be either naive optimism or fearful rejection. We need nuanced exploration, drawing from humanity's full intellectual heritage.
In future letters, we'll dig deeper into specific aspects of consciousness and how they relate to emerging AI capabilities. We'll examine Integrated Information Theory, Global Workspace Theory, and other scientific frameworks alongside Buddhist theories of mind and Vedantic understandings of awareness.
But for now, I invite you to sit with this question: If consciousness is indeed what separates us from machines, how well do you understand your own consciousness? Have you explored it directly, or only theoretically?
As always, I remain both fascinated and humbled by these questions—a beginner walking into territory where experts disagree and certainties dissolve. And in that uncertainty, I continue to find freedom.
Signing off, Kalyani Khona
Complete References and Sources
Silver, D., Huang, A., Maddison, C.J. et al. (2016). Mastering the game of Go with deep neural networks and tree search. Nature 529, 484–489. https://doi.org/10.1038/nature16961
Jumper, J., Evans, R., Pritzel, A. et al. (2021). Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589. https://doi.org/10.1038/s41586-021-03819-2
Kung, T.H., Cheatham, M., Medenilla, A. et al. (2023). Performance of ChatGPT on USMLE: Potential for AI-assisted medical education using large language models. PLOS Digital Health, 2(2), e0000198. https://doi.org/10.1371/journal.pdig.0000198
Varela, F., Thompson, E., & Rosch, E. (1991). The Embodied Mind: Cognitive Science and Human Experience. MIT Press.
Damasio, A. (2018). The Strange Order of Things: Life, Feeling, and the Making of Cultures. Pantheon Books.
Hof, P. R., & Van Der Gucht, E. (2007). Structure of the cerebral cortex of the humpback whale, Megaptera novaeangliae (Cetacea, Mysticeti, Balaenopteridae). The Anatomical Record, 290(1), 1-31. https://doi.org/10.1002/ar.20407
Marino, L., et al. (2007). Cetaceans have complex brains for complex cognition. PLOS Biology, 5(5), e139. https://doi.org/10.1371/journal.pbio.0050139
Janik, V. M. (2013). Cognitive skills in bottlenose dolphin communication. Trends in Cognitive Sciences, 17(4), 157-159. https://doi.org/10.1016/j.tics.2013.01.010
Godfrey-Smith, P. (2016). Other Minds: The Octopus, the Sea, and the Deep Origins of Consciousness. Farrar, Straus and Giroux.
Hochner, B., Shomrat, T., & Fiorito, G. (2006). The octopus: A model for a comparative analysis of the evolution of learning and memory mechanisms. The Biological Bulletin, 210(3), 308-317. https://doi.org/10.2307/4134567
Sumbre, G., Gutfreund, Y., Fiorito, G., Flash, T., & Hochner, B. (2001). Control of octopus arm extension by a peripheral motor program. Science, 293(5536), 1845-1848. https://doi.org/10.1126/science.1060976
Chittka, L., & Niven, J. (2009). Are bigger brains better? Current Biology, 19(21), R995-R1008. https://doi.org/10.1016/j.cub.2009.08.023
Seeley, T. D. (2010). Honeybee Democracy. Princeton University Press.
Bostrom, N. (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
Chalmers, D. J. (1996). The Conscious Mind: In Search of a Fundamental Theory. Oxford University Press.
Chalmers, D. J. (2010). The Character of Consciousness. Oxford University Press.
Koch, C. (2019). The Feeling of Life Itself: Why Consciousness Is Widespread but Can't Be Computed. MIT Press.
de Waal, F. (2016). Are We Smart Enough to Know How Smart Animals Are? W. W. Norton & Company.
Deutsch, E. (1969). Advaita Vedanta: A Philosophical Reconstruction. University of Hawaii Press.
Germano, D. (1992). Poetic Thought, the Intelligent Universe, and the Mystery of Self: The Tantric Synthesis of rDzogs Chen in Fourteenth Century Tibet. Doctoral dissertation, University of Wisconsin-Madison.
Mind & Life Institute. (2003). Destructive Emotions: A Scientific Dialogue with the Dalai Lama. Narrated by Daniel Goleman. Bantam Books.
Tononi, G., & Koch, C. (2015). Consciousness: Here, there and everywhere? Philosophical Transactions of the Royal Society B: Biological Sciences, 370(1668), 20140167. https://doi.org/10.1098/rstb.2014.0167
Baars, B. J. (2005). Global workspace theory of consciousness: Toward a cognitive neuroscience of human experience. Progress in Brain Research, 150, 45-53. https://doi.org/10.1016/S0079-6123(05)50004-9
Thompson, E. (2014). Waking, Dreaming, Being: Self and Consciousness in Neuroscience, Meditation, and Philosophy. Columbia University Press.
Wallace, B. A. (2007). Contemplative Science: Where Buddhism and Neuroscience Converge. Columbia University Press.
Longchenpa. (2007). Now That I Come to Die: Intimate Guidance from One of Tibet's Greatest Masters. Translated by Keith Dowman. Vajra Publications.