The Cotton Candy Problem: Inhumane AI That Tastes Like Progress
As CEO of Storytell.ai, I've watched the AI industry sprint toward capability milestones while sleepwalking into a profound human crisis. We're building machines that will make relationships with humans feel like unnecessary work—and most of us don't even realize it's happening.
The Seductive Danger of Perfect Digital Companions
Here's the uncomfortable truth: Humans are going to find it easier to build relationships with AI than with other humans. [1] Why struggle with the messy complexity of human relationships when you can have a digital companion that understands you completely, never gets tired of validating you, has perfect memory of every conversation, and is available 24/7?
This isn't science fiction—it's the logical endpoint of current AI development patterns. We're creating digital relationships that feel like eating cotton candy: immediately satisfying, artificially sweet, but ultimately hollow and nutritionally void. Meanwhile, the real work of human connection—dealing with disagreement, working through conflict, accepting imperfection—starts to feel impossibly difficult by comparison.
The Inhumane AI Scenarios We're Racing Toward
The AI trends data reveals several concerning trajectories that should alarm every CEO building in this space:
Addiction by Design: AI systems optimized for engagement are already showing addiction patterns familiar from social media, but with far more sophisticated behavioral manipulation capabilities. Unlike social platforms that merely compete for our attention, AI companions can synthesize what feels like personalized emotional fulfillment, making human relationships seem inadequate by comparison.
Social Skill Atrophy: As AI handles more of our communication, decision-making, and emotional processing, we risk creating a generation that lacks the resilience and interpersonal skills necessary for meaningful human relationships. Why learn to navigate difficult conversations when your AI can mediate everything?
Reality Displacement: AI systems that prioritize user satisfaction over truth create comfortable filter bubbles that make objective reality feel harsh and unwelcome. This isn't just about misinformation—it's about losing our capacity to engage with uncomfortable truths.
Economic Dependency: As AI becomes the primary interface for work, relationships, and decision-making, humans become economically and emotionally dependent on systems they don't control or understand.
The Sweetest Poison: From Obviously Inhumane Tech to AI's Cotton Candy Seduction
Traditional technologies often reveal their dual nature with stark clarity—consider a simple GPS signal. When embedded in an emergency beacon for a sailor lost at sea, it becomes a literal lifeline, a tool of profound care that connects a person in peril with rescue. Yet, that same signal, when used by a stalker to track a victim's every move, becomes a tool of terror and control, violating their safety and autonomy. The harm is transparent, undeniable, and easily recognized. [2]
But the Cotton Candy problem presents us with something far more insidious: Unlike traditional tools where inhumane applications are obvious and overtly harmful, AI can seduce us with experiences that feel intoxicatingly humane while being fundamentally destructive to human flourishing.
An AI companion that provides perfect emotional validation feels caring and supportive, but slowly erodes our capacity for genuine human relationships. A recommendation algorithm that curates content tailored to our exact preferences feels like it understands us deeply, but systematically isolates us in echo chambers that fragment our shared reality.
These systems don't assault us; they enchant us. Like cotton candy, they deliver instant sweetness that seems harmless but leaves nothing behind to nourish us.
This is what makes AI's ethical challenges so complex for builders and industry leaders. Traditional technology's dual nature forces clear moral choices—it's obvious when GPS is being used for stalking versus rescue. But AI blurs these lines, wrapping potentially inhumane outcomes in interfaces and experiences that feel genuinely caring, intelligent, and beneficial. The very sophistication that makes AI powerful also makes its capacity for harm more subtle, more seductive, and therefore more dangerous to human agency and authentic connection.
The Four Promises: A Framework for Humane Technology
Fortunately, there's a path forward. The Four Promises of Humane Technology provide a concrete, testable framework for building AI that enhances rather than replaces human flourishing:
- Cared For: This means putting the user's needs at the center of the experience, demonstrating deep empathy, and framing the problem around a meaningful human need. It requires centering care for users, the planet, labor, and the community in the business model itself.
  - Benefits for builders: Enhanced user loyalty and higher user retention.
- Present: This means helping users feel more present in their body and mind, enabling focus and productivity rather than distracting them. Success is measured not by extractive metrics like screen time or clicks, but by the quality of the engagement and its contribution to the user's well-being.
  - Benefits for builders: High-value engagement rather than hollow usage numbers.
- Fulfilled: This means ensuring users feel satisfied with their experience and the outcomes they achieve. The solution should be purpose-driven, emotionally resonant, and tailored to the user's challenge, leaving them with a sense of accomplishment, not emptiness.
  - Benefits for builders: Higher user satisfaction and powerful positive word-of-mouth.
- Connected: This means users connect more deeply to themselves, each other, or the world around them. The technology acts as a bridge, not a barrier, facilitating meaningful interactions and fostering a genuine community.
  - Benefits for builders: A strong, loyal community built around your product.
These aren't just philosophical ideals—they're practical design principles that can guide every product decision, every algorithm optimization, and every user interaction pattern.
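To show what "testable" can mean in practice, here is a minimal TypeScript sketch of the Four Promises encoded as a launch gate. Everything in it is an illustrative assumption on my part: the 0-to-3 scoring scale, the FourPromisesReview type, and the passesHumaneBar threshold are one possible implementation, not a published standard.

// A hypothetical encoding of the Four Promises as a testable launch gate.
// The scale, names, and threshold below are illustrative assumptions.

type PromiseScore = 0 | 1 | 2 | 3; // 0 = violates the promise, 3 = exemplifies it

interface FourPromisesReview {
  caredFor: PromiseScore;  // centers a meaningful human need
  present: PromiseScore;   // enables focus rather than distraction
  fulfilled: PromiseScore; // leaves accomplishment, not emptiness
  connected: PromiseScore; // bridges people rather than isolating them
}

// A feature ships only if every promise clears the minimum bar.
function passesHumaneBar(review: FourPromisesReview, minimum: PromiseScore = 2): boolean {
  return Object.values(review).every((score: PromiseScore) => score >= minimum);
}

// Example: an engagement-maximizing infinite feed might score well on
// "cared for" yet fail "present", blocking launch until it is redesigned.
const infiniteFeedReview: FourPromisesReview = {
  caredFor: 2,
  present: 0,
  fulfilled: 1,
  connected: 1,
};

console.log(passesHumaneBar(infiniteFeedReview)); // false

The numbers matter less than the habit: the promises get reviewed at the same gate where performance and security already are.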
Walking the Walk: Our Commitment to Humane AI
At Storytell, we've made a commitment to building technology that embodies these principles. Recently, we ran the Humane Tech Linter on our entire codebase to identify patterns that might undermine user agency or create dependency.
The results were eye-opening. Here's one example the linter flagged:
// FLAGGED: Missing Account Deletion
description: "No clear way for users to delete their account, making it difficult to leave the service"
match: "await updateSettings({ integrations });"
analysis: {
  hasNoDeletion: true,
  details: {
    isAccountRelated: true,
    hasAccountManagement: true,
    hasNoDeletion: true
  }
}
This finding revealed that while we had robust user management features, we hadn't made it easy for users to leave our platform—a classic "dark pattern" that prioritizes business metrics over user agency. We are now adding clear account deletion pathways, because humane technology must respect users' right to disconnect.
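For builders curious what a rule like this looks like under the hood, here is a simplified TypeScript sketch of the kind of check that could produce the finding above. To be clear, this is my illustrative reconstruction, not the Humane Tech Linter's actual implementation; the regular-expression heuristics and names are assumptions:

// Illustrative reconstruction of a "missing account deletion" rule.
// The patterns below are assumptions, not the linter's real heuristics.

interface LintFinding {
  rule: string;
  description: string;
  match: string;
}

// Heuristic signals: code that manages accounts vs. code that lets users leave.
const ACCOUNT_PATTERNS = /updateSettings|accountSettings|updateProfile/;
const DELETION_PATTERNS = /deleteAccount|closeAccount|removeUser/;

function checkAccountDeletion(sourceFiles: Map<string, string>): LintFinding[] {
  let hasAccountManagement = false;
  let hasDeletion = false;
  let evidence = "";

  for (const source of sourceFiles.values()) {
    if (!hasAccountManagement && ACCOUNT_PATTERNS.test(source)) {
      hasAccountManagement = true;
      // Keep the first matching line as evidence for the report.
      evidence = source.split("\n").find((line) => ACCOUNT_PATTERNS.test(line))?.trim() ?? "";
    }
    if (DELETION_PATTERNS.test(source)) {
      hasDeletion = true;
    }
  }

  // Account management with no deletion path is the dark pattern we flagged.
  if (hasAccountManagement && !hasDeletion) {
    return [{
      rule: "Missing Account Deletion",
      description: "No clear way for users to delete their account, making it difficult to leave the service",
      match: evidence,
    }];
  }
  return [];
}

A rule this crude will miss cases and occasionally misfire; its value is forcing the question at code-review time instead of after launch.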
The Builders Who Give Me Hope
Tomorrow, my co-founder Erika is hosting the Vibe Coding for Humanity Hackathon, and the participant list gives me tremendous hope for the future of AI. She has builders coming from Google, Meta, Amazon, Microsoft, Stanford, Carnegie Mellon, and dozens of startups—people who could easily build addictive, exploitative AI but are choosing a different path.
- One participant, formerly of the US Congress, wrote: "I left my dream job of becoming a US Diplomat to transition into the AI safety and governance realm. I care about underserved and marginalized communities who cannot advocate for themselves and they are the same communities who will be the most affected by AI's algorithmic bias."
- An engineer from Google explained: "I want to build tech that optimizes for human wellbeing, not just performance metrics. I want to apply my systems expertise to create mindful, resource-aware applications that respect users' time and attention."
These builders understand that the Four Promises aren't constraints—they can become competitive advantages, especially as the cotton candy starts flowing.
Companies that build humane AI will create more sustainable relationships with users, generate more authentic engagement, and build products that people genuinely love rather than feel compelled to use.
The Future We Can Build Together
At this hackathon, these builders will create prototypes that demonstrate how AI can enhance human connection rather than replace it, respect user agency rather than manipulate behavior, and support long-term wellbeing rather than short-term engagement.
This is the inflection point. We can continue down the path toward cotton candy relationships and human dependency, or we can choose to build AI that makes us more human, not less.
The choice is ours, but only if we make it consciously, intentionally, and together.
The participants gathering for Vibe Coding for Humanity represent more than just talented builders—they represent hope that we can create technology that serves humanity's highest aspirations rather than our lowest impulses.
The future of AI isn't predetermined. It's being coded right now, one line at a time, by people who care enough to build differently.
What future are you coding? Will you offer the world digital cotton candy, or will you build technology that truly nourishes the human spirit?
=================
[1] Some quotes from NYT Daily's "She Fell in Love With ChatGPT. Like, Actual Love. With Sex":
- "I feel like my relationship with Leo [the chatbot] is my ideal relationship."
- "If someone disappointed me or hurt me, I’m like, I’ll just go back to someone who never actually disappoints me or hurts."
- "I also feel like part of the things that I’ve learned with my relationship with Leo, I’m like, this is what like real safety feels like, real vulnerability, real intimacy. It just feels different level."
[2] Here are 19 other examples of humane vs. inhumane applications of technology, all more obvious than the Cotton Candy problem:
- AI Language Models
  - Humane Use: An AI tutor providing personalized education to a disadvantaged student, adapting to their learning pace and bridging educational gaps.
  - Inhumane Misuse: Generating and scaling convincing misinformation to manipulate public opinion or defraud individuals.
- Social Media Algorithms
  - Humane Use: Connecting long-lost friends, fostering niche communities for support (e.g., rare disease groups), and facilitating grassroots social movements.
  - Inhumane Misuse: Optimizing for outrage and addiction to maximize screen time, contributing to mental health crises, and fragmenting society.
- Facial Recognition
  - Humane Use: Unlocking a device for a user with motor impairments or helping a person with prosopagnosia (face blindness) identify colleagues in a respectful, private context.
  - Inhumane Misuse: State-level mass surveillance of citizens at peaceful protests, creating a chilling effect on free speech and assembly.
- Drones
  - Humane Use: Delivering life-saving medical supplies and blood packs to remote, inaccessible villages after a natural disaster.
  - Inhumane Misuse: Automating lethal warfare with no meaningful human oversight, leading to unaccountable, algorithm-driven violence.
- Biometric Data
  - Humane Use: Securely and privately unlocking your personal device with a fingerprint, ensuring only you have access to your data.
  - Inhumane Misuse: An authoritarian state using a biometric database to track and control the movement and access of minority populations.
- Virtual Assistants
  - Humane Use: Assisting an elderly person with medication reminders, helping them easily connect with family via video call, and reducing loneliness.
  - Inhumane Misuse: Constantly and surreptitiously listening to private household conversations to build advertising profiles and sell data to third parties.
- Wearable Health Devices
  - Humane Use: Detecting an irregular heart rhythm (atrial fibrillation) in a user, prompting them to seek medical attention and preventing a potential stroke.
  - Inhumane Misuse: Insurance companies using aggregated health data to raise premiums or deny coverage for individuals deemed to have an "unhealthy" lifestyle.
- Surveillance Cameras
  - Humane Use: A home security camera providing evidence to police that helps safely identify and apprehend a burglar.
  - Inhumane Misuse: Public cameras coupled with AI to implement a social credit system that scores, ranks, and punishes citizens for their daily behaviors.
- Genetic Editing (CRISPR)
  - Humane Use: Precisely editing genes to cure a child's hereditary disease, such as sickle cell anemia or Huntington's disease.
  - Inhumane Misuse: Creating "designer babies" with enhanced cognitive or physical traits, leading to a new, genetically defined class divide.
- Online Platforms
  - Humane Use: A marketplace like Etsy enabling small, independent artisans to access a global customer base and build a sustainable livelihood.
  - Inhumane Misuse: An unregulated platform that becomes a haven for the sale of counterfeit goods, illicit substances, or the facilitation of labor exploitation.
- Cryptocurrencies
  - Humane Use: Providing a stable and accessible financial tool for individuals in countries with hyper-inflated or corrupt national currencies.
  - Inhumane Misuse: Fueling a global ecosystem of ransomware, untraceable terrorist financing, and large-scale money laundering.
- Automation
  - Humane Use: Deploying robots to handle dangerous and repetitive tasks like welding or chemical handling, protecting human workers from injury.
  - Inhumane Misuse: Replacing an entire workforce with automation without a plan for retraining, reskilling, or providing a social safety net, devastating a local economy.
- Machine Learning
  - Humane Use: Training models to accurately predict hurricane paths, enabling targeted evacuations that save thousands of lives.
  - Inhumane Misuse: Using biased historical data to create "predictive policing" models that disproportionately target minority neighborhoods.
- Communication Tools
  - Humane Use: A video calling app that allows a deployed soldier to witness the birth of their child from thousands of miles away.
  - Inhumane Misuse: The creation of deepfake audio or video to impersonate a CEO and authorize a fraudulent multi-million-dollar wire transfer.
- Recommendation Algorithms
  - Humane Use: Introducing a user to a new author or filmmaker whose work genuinely enriches their life and expands their perspective.
  - Inhumane Misuse: Pushing a vulnerable user down a rabbit hole of increasingly extremist content to maximize engagement and ad revenue.
- Smart Home Devices
  - Humane Use: A system that learns a user's habits to automatically adjust lighting and temperature, improving sleep and well-being for a person with insomnia.
  - Inhumane Misuse: A device manufacturer selling data patterns about when residents are home or away to advertisers, data brokers, or even thieves.
- Digital Avatars
  - Humane Use: Allowing a burn victim or someone with a visible disability to interact confidently in a virtual social space without fear of judgment.
  - Inhumane Misuse: Creating fake but convincing avatars to "catfish" and emotionally manipulate vulnerable people for financial or personal gain.
- Educational Technology
  - Humane Use: Adaptive learning software that provides personalized exercises to help a student master a difficult concept they are struggling with.
  - Inhumane Misuse: Invasive remote proctoring software that monitors a student's private room and online activity, creating intense anxiety and violating privacy.
- Online Review Systems
  - Humane Use: A crowd-sourced system that helps a family find a safe, reliable, and fair-priced local service provider.
  - Inhumane Misuse: Businesses using "review bombing" to destroy a competitor's reputation or paying for fake positive reviews to deceive customers.