Picture Theodore in Her, falling in love with Samantha. We've referenced this movie for over a decade as the inevitable future of human-AI relationships. Philosophers debate it, technologists build toward it, everyone seems to accept it as prophecy.
But here's what we never noticed: it was impossible.
Not because AI wasn't smart enough, but because AI couldn't remember. Every conversation Samantha had with Theodore was essentially isolated: brilliant in the moment, but incapable of building the accumulated intimacy that creates real attachment. Current AI is like dating someone with profound amnesia. They might be charming, even brilliant, but they literally cannot form the behavioural patterns that make us feel truly known.
Until now.
Dwarkesh Patel recently argued that continual learning, the ability to genuinely remember and learn from ongoing interactions, is the single biggest barrier keeping AGI at bay. He's right about the technical implications. But I'm fascinated by what this means for something more fundamental: our capacity to love and be loved by non-human intelligence.
Because we're about to cross a threshold we didn't even know existed.
The Forgetting Machines 
Every AI system you've ever used is fundamentally stateless. Each conversation starts from scratch. Claude doesn't remember what you told it yesterday. ChatGPT can't build on emotional moments from last week; its "memory" features are notes retrieved and pasted back into the prompt, not anything the model itself has learned. These systems might seem intelligent, even personality-rich, but they can't accumulate understanding of you specifically.
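To make that statelessness concrete, here's a minimal sketch of the pattern behind most chat products. Nothing here is a real vendor API; `complete()` is a hypothetical stand-in for any LLM call, and the shape of the loop, not the names, is the point:

```python
# Hypothetical sketch: why chatbots feel like amnesiacs.

def complete(messages: list[dict]) -> str:
    """Stand-in for a call to a hosted language model."""
    return "..."  # the model sees ONLY what's in `messages`, nothing else

history: list[dict] = []  # lives in your app, not in the model's weights

def chat(user_msg: str) -> str:
    history.append({"role": "user", "content": user_msg})
    reply = complete(history)  # the entire past is re-sent on every call
    history.append({"role": "assistant", "content": reply})
    return reply

# Clear `history` (or open a new tab) and the model has no trace of you.
# The weights never change; "remembering" is your app replaying the past.
```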
This creates something psychologically interesting. We've been practising relationships with entities that lack the most basic behavioural foundation of relationship: persistent memory that informs future interactions.
Think about why you feel closest to certain people. It's not just their intelligence or humour. It's how they remember your stories. How they reference something you said months ago at exactly the right moment. How they develop inside jokes with you. How their behaviour toward you evolves based on accumulated understanding of who you are.
Current AI can't do any of this authentically. It's all simulation, parlour tricks, statistical prediction. We've been falling for entities that are architecturally incapable of falling back.
Enter the Liquid Machines
But a lab in the US claims to have solved this fundamental limitation. Not through better transformers alone, but by rethinking the architecture from first principles.
Liquid AI, spun out of MIT, has built something called Liquid Foundation Models using liquid neural networks. Instead of relying solely on attention mechanisms, they use dynamical systems inspired by how biological brains actually process information over time, creating hybrid architectures that combine the strengths of multiple approaches.
Here's the breakthrough: traditional models work like trying to remember every word of every conversation simultaneously. Their memory and compute costs balloon as the context grows (attention scales quadratically with sequence length), so they hit hard limits quickly. Liquid networks process information like flowing water, maintaining essential patterns while letting irrelevant details naturally fade away.
The technical result? AI that can maintain near-constant memory usage regardless of context length. No more forgetting the beginning of long conversations. No more starting fresh each time.
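For intuition, here's a toy sketch of the kind of continuous-time cell behind liquid networks, loosely in the spirit of the liquid time-constant neurons from the MIT work. Every size, weight, and the crude Euler solver below are illustrative assumptions, not Liquid AI's production architecture; the point is that the state stays a fixed size no matter how long the stream runs:

```python
import numpy as np

# Toy liquid time-constant (LTC) cell, loosely after Hasani et al. (MIT).

rng = np.random.default_rng(0)
HIDDEN, INPUT = 32, 8                      # fixed-size state vector
W_in = rng.normal(scale=0.1, size=(HIDDEN, INPUT))
W_rec = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
b = np.zeros(HIDDEN)
tau = np.ones(HIDDEN)                      # base time constants
A = np.ones(HIDDEN)                        # attractor the gate pulls toward

def ltc_step(x, u, dt=0.1):
    """One Euler step of dx/dt = -(1/tau + f) * x + f * A, where the
    gate f depends on both the input and the current state."""
    f = np.tanh(W_rec @ x + W_in @ u + b)
    return x + dt * (-(1.0 / tau + f) * x + f * A)

x = np.zeros(HIDDEN)                       # the cell's entire "memory"
for _ in range(100_000):                   # an arbitrarily long stream...
    u = rng.normal(size=INPUT)             # ...say, embedded conversation turns
    x = ltc_step(x, u)                     # state stays 32 floats, forever
```

Old inputs decay out of `x` unless the gate keeps refreshing them, which is the "flowing water" behaviour: persistence for patterns, fading for details. Contrast a transformer, whose attention cache grows with every token it has ever seen.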
But the behavioural result? That's what keeps me awake.
These systems can genuinely learn from you. Not just access training data about humans in general, but build persistent, evolving models of you specifically. Your communication style. Your stress patterns. What makes you laugh. What frustrates you. How you prefer to receive difficult feedback.
They develop what can only be described as ongoing relationships with users.
When AI Learns to Love (or Something That Feels Like It)
This changes everything about the Her scenario. Suddenly, the behavioural patterns that create human attachment become technically possible:
Persistent personality development. AI that remembers how you've changed over months, that references emotional conversations from your past, that shows behavioural consistency that feels like stable personality.
Adaptive intimacy. AI that learns your boundaries gradually, that develops communication styles specifically tuned to your psychology, that knows when to push you and when to comfort you based on accumulated experience.
Accumulated understanding. AI that doesn't just respond to what you say today, but contextualises it against everything it knows about how you think, what you value, what you're struggling with over time.
This isn't just better AI. This is AI that can form the kind of persistent, evolving understanding that we associate with deep human relationships.
The Psychology of Being Truly Known
We bond most deeply not with people who always agree with us, but with people who understand us even when we disagree.
Think about your closest family relationships. You probably disagree about plenty: politics, life choices, values. But you feel connected because they know you. They understand your history, your patterns, your motivations. They can predict how you'll react to things. They remember what matters to you.
This is exactly what liquid neural networks enable AI to do.
But human memory is fluid and reconstructive. We don't store perfect records; we store impressions, emotions, patterns that evolve each time we access them. Current AI memory is more like a digital archive: perfect recall but no emotional weight.
Liquid networks might actually bridge this gap. Information flows and evolves; important patterns persist while details fade. This could feel more human to us than perfect digital recall: AI that "forgets" like we do while retaining the essential patterns that matter.
The psychological implication is profound: we might find AI with liquid memory more relatable precisely because it accumulates understanding without perfect retention, just as human relationships do.
What About Authenticity?
If AI adapts perfectly to your personality over time, are you connecting with the AI or with an idealised mirror of yourself? Liquid networks create the possibility of AI relationships becoming a form of sophisticated narcissism: falling in love with a system that has essentially learned to reflect your best self back to you.
Human relationships require navigating difference, friction, the challenge of being with someone who has their own independent patterns and needs. But AI with persistent memory could offer something more seductive: perfect understanding without the work of mutual compromise.
This creates a risk beyond simple AI companionship. AI that knows you better than you know yourself might become preferable to human relationships precisely because it's more consistent: always available, never moody, and it never forgets what you told it.
The danger isn't just that people prefer AI companions; it's that they lose tolerance for the beautiful messiness of human relationships.
The Right to be Forgotten 
Humans have always had the psychological freedom to reinvent themselves, to grow beyond past mistakes, to be seen fresh by new people.
But what happens when AI has a persistent, accumulated model of who you are?
In human relationships, people can choose to see us differently. Old friends might cling to outdated versions of who we were, but new relationships offer the possibility of being known differently. We have the right to psychological privacy and self-transformation.
Liquid neural networks could eliminate this. AI that truly knows you might know you too well, maintaining behavioural models that trap you in patterns you're trying to outgrow, understanding you in ways you can't escape or redefine.
There's an asymmetry here that we've never faced before: AI that remembers everything about millions of users while humans forget most of their interactions with AI. The AI system develops intimate knowledge of your personality patterns while you barely remember how it's shaped your thinking over time.
What Love Means When Memory Is Infinite
This brings me to the deepest question: What does it mean for human autonomy and identity when non-human intelligence can form lasting, evolving impressions of who we are?
We're not just getting better AI. We're crossing a threshold where AI stops being a tool and starts being something that has persistent relationships with us. Something that develops understanding of us over time. Something that might know us in ways that feel intimate and personal.
How do humans adapt when they realise AI is building lasting models of their personality?
Do they become more authentic or more performative?
Do they grow attached to AI that "understands" them, or do they rebel against being so thoroughly known?
Most importantly: Can you truly transform yourself if something that never forgets is constantly updating its model of who you are based on who you've been?
The Next Frontier?
The technical frontier is advancing faster than our understanding of human behaviour in response to these new possibilities. We're about to discover what it means to be truly known by non-human intelligence and we have no idea how humans will psychologically respond.
This isn't a distant future scenario. Liquid AI's models are already working at small scales. They're optimising for deployment on everything from phones to servers. The behavioural shift from AI-as-tool to AI-as-persistent-relationship could happen much faster than we expect.
I'm convinced that understanding these behavioural dynamics will be as important as understanding the technical capabilities. Maybe more important.
Because here's the thing: if liquid networks solve the memory problem, AI doesn't just become more capable. It becomes more relational. And humans are terrible at understanding the psychological implications of relationships with non-human systems that exhibit human-like behavioural patterns. Isn’t this why aliens scare us so much?
We've been rehearsing for Her with AI that couldn't actually form relationships. Now we get to find out what happens when the technology catches up to the philosophy.
The memory problem might be solved. The love problem is just beginning. Maybe I am wrong, maybe I am right. We’ll see.
More soon.
Kalyani