AI for Human Flourishing
Below is some thinking and writing I did during the AI for Human Reasoning FLF Fellowship.
1: The Coordination Stack
The Recursive Pattern
Human coordination follows a remarkably consistent pattern across every scale—from cells cooperating to form organs, to nations attempting to avoid nuclear war. Whether we’re examining internal psychological alignment or global climate agreements, the same fundamental challenges appear:
1. Internal Alignment
- Before any entity can powerfully coordinate with others, it must first achieve internal coherence.
- A person plagued by conflicting desires can’t reliably partner with others.
- A company with misaligned departments can’t execute effectively.
2. Credible Signaling
- Once aligned, entities must broadcast their state, values, and intentions in ways others can verify.
- A potential friend must demonstrate trustworthiness.
- A startup must prove its viability to investors.
3. Search & Discovery
- Aligned agents must find each other in a vast sea of noise.
- Two compatible co-founders might never meet.
- Innovative researchers work in parallel, unaware of each other’s breakthroughs.
4. Preference Elicitation & Trade-off Search
- Even aligned entities have different contexts, capabilities, and priorities.
- Two friends must navigate different life goals.
- Allied nations have distinct cultural values and resources.
5. Shared World Model
- Coordination requires shared understanding of what’s actually true.
- Partners need to agree on market conditions.
- Teams need common mental models.
6. Forecasting & Prioritization
- Groups must predict which actions will achieve their goals and matter most.
- A couple planning their future, a startup choosing its product direction, or humanity deciding how to handle AI—all require forecasting what’s important and achievable.
7. Resource Allocation
- Even perfectly aligned groups must decide who gets what, when.
- Companies budgeting between departments, or nations sharing water rights—all face allocation dilemmas.
8. Ongoing Iteration
- Coordination isn’t static. Reputation must be tracked, commitments verified, strategies updated.
- Trust builds or erodes based on observed behavior.
- When coordination breaks down, groups need conflict resolution mechanisms to repair and rebuild—or they fracture permanently.
The Fundamental Bottlenecks
At each level of this stack, predictable bottlenecks emerge:
Signal Decay at Scale
- Deep, high-bandwidth connections don’t scale. You can deeply know 5 people, somewhat know 50, but barely know 500.
- Each hop in a trust network degrades the signal. This is why small teams can outperform large organizations, why movements fragment, and why global coordination seems impossible.
Verification Costs
- Determining whether signals are genuine requires time, attention, and repeated interactions.
- It’s expensive to verify whether someone is truly aligned or merely skillfully deceptive.
- This cost compounds quickly with group size: with n members there are n(n-1)/2 pairwise relationships to verify, and far more possible coalitions.
Bandwidth Limitations
- Humans can only process so much information.
- A two-person conversation can explore nuanced values and complex plans. A 20-person meeting devolves into sound bites. A million-person movement reduces to slogans.
- Every individual carries a lifetime of experiences, traumas, hopes, and mental models—merging these contexts into shared understanding is computationally intractable with current human communication bandwidth.
Search Friction
- Finding aligned collaborators in a world of 8 billion people is monumentally difficult.
- Most potential collaborators never discover each other, and most resources are matched nowhere close to optimally.
Developmental Mismatches
- People operate from fundamentally different worldviews and cognitive stages.
- Someone who needs rules and hierarchy can’t coordinate with someone seeking radical emergence.
- Traditional institutions clash with disruption-oriented startups. These aren’t just disagreements but incompatible operating systems for coordination itself.
Time Horizon Conflicts
- A day trader thinks in minutes, a startup in months, and a government in election cycles, while climate change unfolds over centuries.
- These mismatched timescales make coordination nearly impossible—what’s urgent for one is irrelevant for another.
Power Asymmetries
- Coordination assumes rough equality, but reality features massive disparities in resources, options, and influence.
- The wealthy can wait out the poor. The powerful can change the rules. What passes for coordination becomes coercion.
Moloch/Multipolar Traps
- Race dynamics where everyone knows cooperation would be better, but competitive pressures force defection.
- AI labs racing toward AGI, nations stockpiling weapons, companies externalizing costs—all are trapped in games where unilateral cooperation means losing.
Meta-Coordination Overhead
- The cost of deciding HOW to coordinate often exceeds the coordination itself.
- Teams spend more time discussing process than doing work.
- International bodies debate voting mechanisms while crises unfold.
Path Dependence
- We’re locked into suboptimal coordination patterns by switching costs and network effects.
- QWERTY keyboards, fossil fuel infrastructure, two-party political systems—even when everyone agrees better options exist, the transition cost prevents change.
Why Goodness Doesn’t Automatically Spread
Given these bottlenecks, we can understand why beneficial coordination often fails while harmful coordination sometimes thrives:
Asymmetric Ease
- Destruction is easier than creation. It takes years to build trust, seconds to destroy it.
- One defector can poison a high-trust community. One terrorist can undo decades of peace-building. This fundamental asymmetry favors harmful actors.
Threat Salience Often Outcompetes Opportunity Salience
- Shared enemies create stronger signals than shared hopes.
- Our brains weight threats more heavily than opportunities—an evolutionary survival mechanism that now undermines positive coordination.
- It’s easier to unite against something than for something.
Small, Dedicated Groups Often Outcompete Large, Diffuse Ones
- A focused company extracts value from millions of loosely coordinated consumers.
- A small extremist cell destabilizes a peaceful society.
- Drug cartels outmaneuver governments.
Parasitic Strategies Evolve Faster
- Scammers adapt quicker than institutions.
- Viruses mutate faster than vaccines.
- Defense requires protecting everything; offense requires finding one weakness.
Externality Blindness
- Harmful coordination often succeeds by pushing costs onto those without voice or power—future generations, animals, the global poor, the environment.
- Factory farming inflicts suffering on billions of animals who cannot participate in the coordination discussion.
Information Asymmetry as Weapon
- Bad actors deliberately maintain confusion, complexity, and fear.
- Disinformation is cheaper than education.
- Doubt is easier to sow than knowledge.
- The tobacco industry’s playbook—manufacture uncertainty—is now replicated everywhere.
Status Games Reward Competition
- Humans play zero-sum status games even when positive-sum cooperation is available.
- Academic publishing, social media engagement, corporate ladder climbing—these systems reward individual achievement over collective progress.
Network Effects Entrench Extraction
- Even harmful systems persist because switching costs are prohibitive.
- Everyone hates health insurance companies, but changing the system requires coordinating millions simultaneously.
- Platforms become monopolies. Corrupt institutions become “too big to fail.”
Defection is Locally Optimal
- Even when cooperation would benefit everyone long-term, the immediate incentive is often to defect.
- The tragedy of the commons plays out fractally at every scale—from roommates avoiding dishes to nations avoiding carbon taxes.
This recursive pattern and its bottlenecks explain why, despite tremendous technological progress and material abundance, we still see massive coordination failures: billions of animals suffering in factory farms, existential risks from AI, and billions of humans falling short of their potential.
The problem isn’t lack of resources or good intentions—it’s the fundamental difficulty of coordination itself.
2: The Pattern in Practice - Successes, Failures, and Mixed Cases Across Scales
Level 1: Internal Coordination (Within One Person)
✅ Meditation/Therapy
Technologies like Buddhist mindfulness, Internal Family Systems, and MDMA-assisted therapy achieve internal alignment by giving different parts of the psyche voice and integration. The key: creating internal bandwidth through deliberate attention.
❌ Addiction/Trauma
When fear or pain hijacks the system, internal parts work at cross-purposes. The addicted person simultaneously wants and doesn’t want to use. Trauma fragments the self. Without intervention, these states self-perpetuate.
🤷🏻 Most Humans
We achieve partial alignment through habits, routines, and occasional reflection, but remain internally conflicted. We procrastinate, self-sabotage, and act against our stated values daily.
Level 2: Dyadic Coordination (Two People)
✅ Great Marriages/Co-founders - Succeed through high-bandwidth communication, aligned incentives, and iteration. Successful couples develop private languages. Great co-founders (Jobs/Wozniak) complement each other’s skills while sharing a vision.
❌ Divorce/Breakups - Roughly half of marriages end due to value drift, communication breakdown, or betrayed trust. Most co-founder relationships implode. The intimacy required for deep coordination also creates vulnerability to deep harm.
🤷🏻 Most Relationships - Function adequately through compromise and conflict avoidance rather than true alignment. Good enough to persist, not good enough to flourish.
Level 3: Small Group Coordination (5-20 People)
✅ Elite Teams - Navy SEALs, early-stage startups, bands like The Beatles succeed through intense selection, shared adversity, clear roles, and constant communication. Small enough for everyone to model everyone else.
❌ Committees - Most committees produce compromise documents nobody believes in. Groupthink, diffusion of responsibility, and politics override truth-seeking. “A camel is a horse designed by committee.”
🤷🏻 Friend Groups - Coordinate social activities successfully but rarely achieve productive goals together. The coordination is real but limited to low-stakes domains.
Level 4: Large Group Coordination (50-500 People)
✅ Functional Companies/Congregations - Successful through hierarchy, culture, and ritual. Early Google maintained coordination through OKRs and radical transparency. Churches coordinate through shared narrative and weekly synchronization rituals.
❌ Bureaucracies - Most large organizations become sclerotic. Goodhart’s Law destroys metrics. Middle management creates telephone games. Everyone knows the system is broken, but no one has the ability, or cares enough, to fix it.
🤷🏻 Universities - Successfully coordinate education through departments and degrees, but fail at interdisciplinary innovation. The same structures that enable teaching prevent research breakthroughs.
Level 5: National Coordination (Millions)
✅ Nordic Model/Singapore - High-trust societies with aligned values, functional institutions, and feedback loops. Denmark coordinates through cultural homogeneity and strong safety nets. Singapore through competent technocracy.
❌ Failed States - Somalia, Syria, Venezuela show what happens when coordination completely breaks down. No monopoly on violence, no shared reality, no trust in institutions.
🤷🏻 Democracy - Partially solves the succession problem (peaceful transitions) and error correction (voting out failures), but creates short-term thinking, polarization, and manipulation through fear. Better than monarchy, far from optimal.
Level 6: Global Coordination
✅ Scientific Method/Internet Protocols - Science coordinates global knowledge production through reproducibility and peer review. TCP/IP coordinates billions of devices through simple, robust standards. Wikipedia aggregates human knowledge through minimal viable rules.
❌ Existential Risks - Nuclear and biological weapons, climate change, factory farming, and a risky race to build advanced AI.
🤷🏻 Global Markets - Successfully coordinate production and distribution of goods worldwide, lifting billions from poverty. But also create races to the bottom, exploitation, and systematic externalities.
Revealed Patterns
At every level, successful coordination shares features:
- Aligned incentives (marriage vows, equity compensation)
- Verification mechanisms (scientific reproducibility, blockchain)
- Communication protocols (private languages in couples, OKRs in companies)
- Shared narratives (religious stories, national myths)
- Error correction (therapy for individuals, democracy for nations)
Failures share opposite features:
- Misaligned incentives (addiction, principal-agent problems)
- No verification (fake signals, cheap talk)
- Communication breakdown (trauma fragmenting the self, bureaucratic telephone games)
- Competing narratives (polarization, civil war)
- No error correction (dictatorships, failed states)
3: The AI Breakthrough - How Abundant Intelligence Dissolves Ancient Bottlenecks
For the first time in history, we’re about to have abundant, cheap, human-level (and beyond) intelligence that can be deployed at massive scale. It’s a meta-tool that can solve coordination problems we’ve struggled with since the dawn of civilization.
How AI Could Dissolve Each Fundamental Bottleneck
For each bottleneck, there could be many solutions. Here are a few examples to get a sense of what is possible:
Signal Decay at Scale → Persistent, Shareable Context
- AI can maintain persistent context across unlimited relationships within privacy and consent bounds. Your AI knows exactly what you told it about your values two years ago, and can share that context with another person or AI while respecting privacy constraints.
- In practice: AI delegates that carry your complete context into every negotiation, maintaining high-bandwidth connection even across thousands of interactions.
Verification Costs → Behavioral Authentication
- AI can analyze patterns across vast behavioral datasets to verify authenticity. Fake signals become computationally expensive to maintain consistently.
- In practice: Reputation systems that track follow-through across all commitments, working toward quantifiable trust (a toy sketch follows). Note that Sybil resistance, gaming prevention, and cross-context portability remain open problems requiring ongoing research.
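To make this concrete, here is a minimal sketch (Python) of what an exponentially decayed follow-through score could look like. The `Commitment` record and the half-life parameter are invented for illustration, not a design; a real system would still need the Sybil resistance, gaming prevention, and privacy safeguards mentioned above.

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional


@dataclass
class Commitment:
    """One recorded promise and whether it was kept (hypothetical schema).

    Timestamps are assumed to be timezone-aware.
    """
    made_at: datetime
    kept: bool


def follow_through_score(history: list[Commitment],
                         half_life_days: float = 180.0,
                         now: Optional[datetime] = None) -> float:
    """Exponentially weighted follow-through rate in [0, 1].

    Recent commitments count more than old ones, so people are not
    permanently locked into past mistakes. Returns 0.5 (maximum
    uncertainty) when there is no history at all.
    """
    now = now or datetime.now(timezone.utc)
    weighted_kept = 0.0
    total_weight = 0.0
    for c in history:
        age_days = (now - c.made_at).total_seconds() / 86400
        weight = 0.5 ** (age_days / half_life_days)  # halves every half-life
        weighted_kept += weight * (1.0 if c.kept else 0.0)
        total_weight += weight
    return weighted_kept / total_weight if total_weight > 0 else 0.5
```

The decay is one possible answer to the "locking people into past mistakes" risk flagged in the misuse section below: the open problems do not vanish, but they become explicit parameters rather than hidden policy.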
Bandwidth Limitations → Compressed Communication
- AI can compress lifetimes of context into efficient exchanges, translate between different communication styles, and expand brief inputs into full proposals.
- In practice: “Values handshakes” where hour-long AI-mediated conversations capture what would take months of human interaction to discover.
Search Friction → Intelligent Matching
- AI can search across billions of potential collaborators, understanding deep compatibility beyond keywords.
- In practice: Finding the perfect co-founder from a pool of millions, or connecting patients with exactly the right specialist based on subtle pattern matching (sketched below).
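One way to picture "deep compatibility beyond keywords" is to compare free-text profiles in an embedding space and rank by similarity. A hedged sketch: `embed()` is a stand-in for any real text-embedding model, and the profile format is invented.

```python
import numpy as np


def embed(text: str) -> np.ndarray:
    """Toy stand-in for a real text-embedding model (hypothetical)."""
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.normal(size=256)
    return v / np.linalg.norm(v)  # unit-length vector


def compatibility(profile_a: str, profile_b: str) -> float:
    """Cosine similarity between two profiles, in [-1, 1]."""
    return float(embed(profile_a) @ embed(profile_b))


def best_matches(query: str, candidates: dict[str, str], k: int = 3):
    """Rank candidate profiles by similarity to the query profile."""
    scored = [(compatibility(query, text), name)
              for name, text in candidates.items()]
    return sorted(scored, reverse=True)[:k]
```

The point is not the toy embedding but the shape of the system: similarity search over learned representations scales to millions of candidates, while real deployments would add constraints such as availability, mutual consent, and the filter-bubble risks noted in the misuse section.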
Developmental Mismatches → Universal Translation
- AI can translate between different worldviews, explaining concepts in the listener’s native framework while preserving meaning.
- In practice: AI mediators that help traditional institutions collaborate with startups by translating between their incompatible operating systems.
Time Horizon Conflicts → Temporal Bridging
- AI can model outcomes across multiple timescales and find interventions that work across different horizons, though uncertainty grows with temporal distance. Multi-horizon plans with decision review gates handle this uncertainty better than attempting omniscient long-term prediction.
- In practice: Climate proposals that satisfy both quarterly earnings needs and century-long sustainability goals (see the sketch below).
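A toy version of that temporal bridging, under invented numbers: score each candidate plan at several discount rates (a high rate as a proxy for short-horizon actors, a near-zero rate for century-scale concerns) and keep only plans that clear the bar at every rate.

```python
def npv(cashflows: list[float], rate: float) -> float:
    """Net present value of yearly cashflows at a given discount rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))


def robust_plans(plans: dict[str, list[float]],
                 rates=(0.0, 0.05, 0.30),
                 threshold: float = 0.0) -> list[str]:
    """Keep plans whose NPV clears the threshold at *every* rate."""
    return [name for name, flows in plans.items()
            if all(npv(flows, r) >= threshold for r in rates)]


# Invented cashflows for three stylized plans (year 0 .. year 4):
plans = {
    "short_term_only": [100, 60, 0, -40, -350],   # looks great to impatient actors
    "long_term_only": [-300, -100, 0, 100, 600],  # only patient actors accept it
    "bridging": [20, 30, 40, 60, 90],             # modest gains on every horizon
}
print(robust_plans(plans))  # ['bridging']
```

Review gates could re-run the same check as forecasts update, which is closer to the "multi-horizon plans with decision review gates" idea above than to one-shot long-term prediction.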
Power Asymmetries → Capability Amplification
- AI democratizes intelligence and coordination capacity. Small actors get the same strategic modeling as large ones.
- In practice: Individual citizens with AI assistance negotiating with corporations on equal cognitive footing.
Moloch/Multipolar Traps → Simultaneous Coordination
- AI can explore vast solution spaces to find escape routes from competitive traps, and coordinate synchronized transitions.
- In practice: AI systems finding “safe” development paths where all AI labs can slow down together without any one falling behind (a minimal sketch follows).
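One known escape hatch for this kind of trap is the conditional commitment, or assurance contract: each party pledges to cooperate only if every other required party pledges too, so nobody risks unilateral loss by signing first. A minimal sketch with invented lab names; verification and enforcement are where the earlier tools come back in.

```python
def activated(pledges: dict[str, bool], required: set[str]) -> bool:
    """A conditional commitment binds only once every required party
    has pledged; until then, signing first costs nothing."""
    return all(pledges.get(party, False) for party in required)


labs = {"LabA", "LabB", "LabC"}            # invented names
pledges = {"LabA": True, "LabB": True}     # LabC has not signed yet
assert not activated(pledges, labs)        # nobody is bound yet

pledges["LabC"] = True
assert activated(pledges, labs)            # everyone slows down together
```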
Meta-Coordination Overhead → Process Automation
- AI can handle the logistics of coordination—scheduling, protocol design, information routing—leaving humans to focus on decisions.
- In practice: AI automatically generating and testing governance structures, finding better voting mechanisms in minutes instead of decades (illustrated below).
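As a small taste of "testing governance structures": given the same simulated ballots, different voting rules can elect different winners, and that divergence is exactly what automated testing would surface. The preference profile below is invented; plurality and Borda count are standard rules used purely as examples, not a recommendation.

```python
from collections import Counter


def plurality_winner(ballots: list[list[str]]) -> str:
    """Count only first choices."""
    return Counter(ballot[0] for ballot in ballots).most_common(1)[0][0]


def borda_winner(ballots: list[list[str]]) -> str:
    """Award n-1, n-2, ..., 0 points by rank and sum across ballots."""
    scores = Counter()
    for ballot in ballots:
        for rank, option in enumerate(ballot):
            scores[option] += len(ballot) - 1 - rank
    return scores.most_common(1)[0][0]


# Invented profile: 4 voters, 3 voters, and 2 voters with these rankings.
ballots = ([["A", "B", "C"]] * 4 +
           [["B", "C", "A"]] * 3 +
           [["C", "B", "A"]] * 2)

print(plurality_winner(ballots))  # A: most first-place votes
print(borda_winner(ballots))      # B: broadest overall support
```

Which winner is "right" depends on what the group values, and surfacing that trade-off quickly is the point of automating the testing.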
Path Dependence → Migration Assistance
- AI can manage complex transitions by handling the cognitive overhead of switching systems while maintaining backward compatibility.
- In practice: AI managing individual transitions from fossil fuels to renewables, customizing the switch for each person’s specific situation.
Solving Specific Failures at Each Level
Level 1 (Internal): AI therapists providing 24/7 Internal Family Systems work, helping integrate traumatized parts and align conflicting goals through unlimited patience and perfect memory.
Level 2 (Dyadic): AI relationship counselors that understand both partners completely, identifying miscommunications in real-time and suggesting win-win solutions neither party would have seen.
Level 3 (Small Groups): AI facilitators that give every team member equal voice, surface hidden wisdom from quiet members, and synthesize discussions into clear decisions without groupthink.
Level 4 (Large Groups): AI governance systems that can actually model all stakeholders’ needs simultaneously, making bureaucracies responsive and adaptive instead of sclerotic.
Level 5 (National): AI systems that make direct democracy scalable through intelligent aggregation of citizen preferences, and long-term planning tools that escape electoral short-termism.
Level 6 (Global): AI coordination platforms that make the suffering of animals and future generations visible in real-time decision-making, and find profitable paths to solving climate change and other commons problems.
The Unprecedented Opportunity
- Infinite patience - AI mediators that can work through conflicts for as long as needed
- Perfect memory - Never losing context or forgetting commitments or actions
- Massive parallelism - Exploring millions of solution options, having millions of conversations, or taking millions of actions simultaneously
- Emotional attunement - AI that can be simultaneously optimized for each participant’s communication style and context
- Auditable procedures - AI systems with transparent governance and auditable processes behind any claim to neutrality, recognizing that neutrality itself is contextual
- Scalable customization - Personalized coordination support for billions simultaneously
- And more?
For the first time, we can have coordination tools that are simultaneously high-bandwidth and massively scalable—breaking the fundamental tradeoff that has limited human coordination since the beginning.
Misuse Risks and Guardrails
Many coordination capabilities carry significant misuse potential that must be addressed proactively:
Reputation Systems
- Social credit authoritarianism, punishing dissent, locking people into past mistakes
Intelligent Matching
- Filter bubbles reinforcing extremism, discriminatory sorting, surveillance capitalism
AI Mediators & Negotiation Delegates
- Manipulation through emotional exploitation, coercion disguised as mediation, concentration of power
Behavioral Authentication
- Invasive surveillance, behavioral profiling for control, false positive punishment
4: The Fellowship - Building Coordination Infrastructure for Humanity’s Critical Decade
Why This Fellowship, Why Now
We stand at an unprecedented confluence of crisis and capability. The coordination failures outlined in Section 2 are reaching critical points. Meanwhile, the AI breakthroughs described in Section 3 are arriving not in decades but in years.
The window is narrow. In the next 2-8 years, AI will either:
- Exacerbate existing coordination failures through surveillance, manipulation, winner-take-all dynamics, etc.
- Or become the infrastructure for unprecedented human flourishing
The FLF Vision: Small Teams, Big Leverage
The Future of Life Foundation recognizes a crucial insight: while trillion-dollar companies race to build more powerful AI, almost nobody is building the coordination layer—the tools that ensure AI amplifies human wisdom.
This fellowship exists to:
- Find the Builders - Gather people who understand both the technical possibilities and the human needs, who can bridge the gap between AI capability and coordination reality.
- Prototype the Future - In 12 weeks, create working demonstrations that prove these tools are possible. Not whitepapers or theories, but actual software people can use.
- Seed the Ecosystem - These prototypes become existence proofs that attract talent, funding, and adoption. Each successful tool makes the next one easier to build.
The Approach
Start Small, Think Big - We’re not trying to solve global coordination immediately. We’re building tools for specific users with real needs: policy makers who need better forecasting, teams struggling with decision-making, communities seeking consensus. These become the primitives for larger coordination.
Living Lab Philosophy - We rapidly prototype, test with real users, and iterate based on actual feedback. Whether it’s government officials using forecasting tools or AI labs testing negotiation systems, we ground our work in reality.
Flexible Funding & Distribution - We’re agnostic about organizational structure. Some projects might become VC-funded startups that scale rapidly. Others might work best as grant-funded public goods. What matters is impact—getting these tools to whoever needs them through whatever path works.
Strategic Acceleration - We target high-leverage opportunities where even small time advantages matter. Building a coordination tool 6 months before it would naturally emerge could shift entire industries. We look for neglected areas, but also race to implement obvious ideas before they’re captured by extractive actors.
What Success Looks Like
Immediate: Working prototypes validated by actual users. Policy makers using our forecasting tools to make better decisions. AI labs testing negotiation systems for safety agreements. Teams reporting measurably better outcomes from our decision-support tools.
Near-term: These tools spreading through natural adoption. Each successful deployment becomes a case study. Some fellows launch funded startups. Others create widely-adopted open tools. The diversity of approaches strengthens the ecosystem.
Medium-term: Infrastructure that transforms how groups operate. Benchmarks that measure coordination quality. Datasets that train the next generation of coordination AI. Multiple successful companies and nonprofits emerging from fellowship projects.
Long-term: These tools becoming fundamental infrastructure—as essential as email or spreadsheets. AI labs using coordination tools to escape race dynamics. Governments engaging citizens through AI-mediated deliberation. Communities making decisions that actually represent everyone, including those without voice.
The Call to Action
The recursive pattern of coordination—from internal alignment to global cooperation—has constrained humanity for millennia. We’ve documented the bottlenecks, witnessed the failures, and glimpsed the successes. Now, for the first time, we have tools powerful enough to dissolve these ancient limitations.
But tools don’t build themselves. AI won’t automatically solve coordination—it could just as easily entrench existing failures. The difference between dystopia and flourishing isn’t the capability of our AI, but the wisdom of our applications.
This fellowship is a bet that a small group of thoughtful, committed citizens can play a part in building the coordination layer for humanity’s future. Not through grand proclamations or regulatory capture, but through practical tools that work, spread, and compound—whether as startups, nonprofits, or public goods.
The question isn’t whether AI will transform human coordination. It’s whether we’ll guide that transformation toward flourishing or failure.
The coordination stack awaits. Let’s build it.