Jim Rutt on the Win-Win podcast
Date: October 2025 | Type: Podcast
Summary of Jim Rutt Podcast Transcript
Section 1: Technology Addiction & Digital Hygiene (00:00-09:00)
Jim’s Points:
- Tried switching to a flip phone in 2019 but gave up after 6 months - smartphones are “addictive, useful and handy”
- Maintains tech hygiene practices: notifications off, avoids phone in bedroom, prefers web over apps
- Takes 6-month social media sabbaticals (July 1 - January 2) consistently for 6 years
- Key benefit: stops “background processing” on internet arguments, frees mental energy
- Smartphones and social media use intermittent reward/dopamine mechanisms, exactly like slot machines
- Facebook had 700+ job listings with “psychology” in them (2019) - deliberately engineered addiction
My (Matt Brooks) Take: I definitely believe that technology can be very addictive, and digital hygiene is key. I think too many people are wasting too much time on Reels and TikTok with no benefit to them, and Sora is obviously the next step, where it’s not even human-generated, it’s AI-generated. I don’t think the idea of a 6-month social media sabbatical or a flip phone is scalable at all. I’m not sure what the answer is, but you’re going against a lot of people with vested interests in getting people’s attention on these feeds, and there won’t be an easy solution. But I agree it’s a problem, and I’ve tried to deal with it with custom blockers, having my phone automatically turn black and white after 6:00 p.m., and things like that, but obviously that’s not scalable for other people. So I think this is a big open problem.
Section 2: Slot Machines & Psychological Manipulation (09:00-14:00)
Jim’s Points:
- Slot machine companies employ 200-1000+ PhD psychologists (“rat torturers”) to optimize addiction
- Visceral reaction to slot machines: “like sticking your face in a disc sander”
- His mother was addicted despite being intelligent - chose to enter “reference frame of the game”
- Key insight: Loss possibility is essential - free machines at home wouldn’t work
- Lower hierarchical complexity makes people more susceptible to these games
Section 3: Career Awakening to Game A Problems (14:00-20:00)
Jim’s Assertions:
- 1994 was pivotal year - discovered sustainability problems through The WELL community
- Major epiphany: “Playing an infinite game on a finite planet - that’s fucked, can’t work”
- Markets are “one of humanity’s greatest inventions” - superior to force/feudalism
- But in the 1920s, Edward Bernays (Freud’s nephew) invented scientific advertising, laid out in his 1928 book “Propaganda”
- Critical shift: Market changed from serving customer needs to manufacturing needs and programming consumers
- “When I’m growing wheat and you’re growing wine, trade is benign. But if I can beam messages into your brain saying you need pink sunglasses and sparkly flip flops, suddenly we have a totally different kind of market.”
Section 4: Manufactured Need vs. Real Innovation (20:00-26:00)
Jim’s Views:
- Real innovations solve actual problems: wheelbarrow, railroad, electric drill - didn’t need advertising
- Humans always played status games, but Bernays + money = psychologically programmed status games
- Three “adverse gods” of modern system: Moloch (multipolar traps), Mammon (money worship), Hypnos (false beliefs)
- All driven by “money on money return” as the inner loop coordination mechanism
- Advertisement aggregates not just what people need, but what they’ll buy quickly for profit
My (Matt Brooks) Take: I mostly agree with this section. My only caveat is that advertising and marketing are not always bad. It’s hard to discern the good from the bad, and most of it might be bad. But if I see some advertising that shows me a product or service I end up buying and liking, and I get positive ROI on what I put into it, then that advertising is fine. It’s a signal that a good company has honest, fair advertising and sells a product or service you actually benefit from. If you think you’re getting more value than the money you put in, that’s a great signal to receive. It’s just really hard to serve genuinely great marketing signals, and obviously a lot of companies will want to manufacture demand and run bullshit marketing, but not 100% of them.
Section 5: Status Games & Market Dynamics (26:00-36:00)
Jim’s Analysis:
- TVs were status symbols in the past, now purely utility - status game evaporated
- Cars still major status game (Range Rover, G Wagon = “douche bag tattoo”)
- Little profit in cheap cars, huge profit in status cars (Porsche worth more than Volkswagen)
- Luxury goods industry is “revolting” - $1000 purses, $20,000 watches
- Interestingly, no “Rolex of phones” emerged - possibly due to Steve Jobs’ egalitarianism
- Status games have shifted to digital realm (social media followers) for younger generations
Section 6: Game B Definition & Core Concepts (36:00-48:00)
Jim’s Framework:
- Game A: Status quo of late-stage modernism, financialized capitalism, driven by money-on-money return
- Game B: Alternative social operating system applying family/friendship principles at all scales
- Key principle: Multi-dimensional tokens (not just money) - example: “bluefish coins” for sustainable harvest limits
- Requires both institutional change AND personal capacity development - they scaffold each other
- Must start “tiny and local” in “membranes” (semi-permeable boundaries)
- Coherent pluralism: Different membranes can have different rules within shared top-level accords
- Subsidiarity: Push decisions down as low as possible
My (Matt Brooks) Take: I totally understand the concept of the bluefish coin and multi-dimensional tokens, but if they actually believe you need real cryptocurrencies for every limited resource, I think that’s going to be really hard to get people to use. Obviously you would need AI agents; everyone would have to have a personalized AI agent managing their wallet, because you’re not going to manage 1,000 different currencies yourself. It would take too much time and effort, and it just wouldn’t be worth it. But crypto uptake among real people for daily things is still basically zero percent, and it will probably stay very low over the next three to five years. So I don’t think this method is really scalable, or it’s not going to work. I totally understand the point, and I like the idea of exploring different market mechanisms, different ways of providing signal and sharing resources, and limiting our effects on at-risk ecosystems. But if the solution is actually crypto coins, I think there’s kind of no way.
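The mechanics of a multi-dimensional token can be illustrated without any cryptocurrency at all. This is a minimal toy sketch, assuming a hypothetical `Wallet` class and the “bluefish” example from the transcript; it only shows the key property Jim describes, that per-resource tokens are deliberately not convertible into one another:

```python
# Toy sketch of multi-dimensional tokens: each scarce resource gets its own
# non-convertible balance, so a harvest permit can't be bought with money.

class Wallet:
    def __init__(self):
        self.balances = {}  # token name -> amount held

    def credit(self, token, amount):
        # A membrane issues tokens to its members (e.g., a seasonal quota).
        self.balances[token] = self.balances.get(token, 0) + amount

    def spend(self, token, amount):
        # Tokens are deliberately NOT fungible across dimensions: you can
        # only spend what was issued for that specific resource.
        if self.balances.get(token, 0) < amount:
            raise ValueError(f"insufficient {token}")
        self.balances[token] -= amount


# A membrane issues each member 10 "bluefish" coins for the season;
# harvesting one fish burns one coin, so total catch is capped by issuance.
wallet = Wallet()
wallet.credit("bluefish", 10)
wallet.spend("bluefish", 3)  # harvest 3 fish
print(wallet.balances["bluefish"])  # 7 coins left this season
```

The point of the sketch is that the sustainability limit lives in the issuance step, not in prices: no amount of dollars in another balance lets you harvest an eleventh fish.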
Section 7: Why Game A is Self-Terminating (48:00-59:00)
Jim’s Arguments:
- Game A started ~1694 (Bank of England) with 1/120th current human impact on planet
- Exponential growth + finite boundaries = inevitable crash
- Since ~1975, institutions have been degrading human capacity (despite higher education levels)
- Downward spiral: worse humans → worse institutions → worse humans
- Carbon tax proposal (signed by 300 economists) is “within 10% of right answer” but gets no hearing anywhere
- Companies act as “obligate sociopaths” due to fiduciary duty to short-term profit
- Finance layer (pension funds, mutual funds) is “even more intensely Moloch” - utterly fungible
Section 8: Planetary Boundaries & Degrowth (59:00-1:35:00)
Jim’s Position:
- Should aim for ~4,000 watt society (Portugal level) - currently US is 11,000 watts
- Global population will peak around 2100 at 10-11 billion
- Real pressure: rising GDP per capita, not just population
- Rejects “degrowth” term - distinguishes between “good growth” (iPhones, information, intricacy) vs “gross growth” (bulk materials, fast fashion)
- Examples: Long-lasting embroidered clothes telling life stories vs. 6-week fast fashion cycle
- “Information wants to be free” - marginal cost is low, can make everyone richer in “captured intricacy”
- Requires both personal change AND scaffolding micro-society - neither alone is sufficient
My (Matt Brooks) Take: I’m not convinced that Game A is self-terminating. I understand the point about exponential growth and finite boundaries, but with AI I think it’s actually possible to generate more power and electricity than people want to consume, on a yearly increasing basis. I actually think it’s possible for everyone in the world to have the same level of creature comforts Americans have now and still be fine, and even more than that. So I’m definitely against this 4,000-watt society thing. I think people won’t go for it, and I don’t think it’s the right thing to shoot for.
I understand the difference between degrowth meaning having fewer things and degrowth meaning valuing more intricate, meaningful, important things that last. I just don’t think it’s a memetically positive story; no one ever wants to have less after they’ve had more. I am definitely not someone who wants fast fashion, buys status symbols, or wastes money. I spend very little, and I totally understand that more meaning might mean fewer things, or fewer but better and more personal things. On a personal level I totally subscribe to it. I just don’t think you’re going to get there on a societal level in the next five years.
And with AI and energy advancements, I think we should instead be pushing the other way: if we put solar everywhere we can, put fission everywhere we can, and work on fusion, the problem isn’t how much energy we create. The problem is how much CO2 we put into the atmosphere, global warming, and how many resources we have to consume in a harmful way to create that energy. I just don’t think solving that is hopeless. And I would guess that the median Game B supporter lives an 11,000-watt lifestyle and would find it very hard to go down to 4,000 watts.
Section 9: Cities & The Civium Concept (1:20:00-1:24:00)
Jim’s Hypothesis:
- Cities have positive scaling laws: 15% increase in patents, crime, etc. per doubling
- Jordan Hall’s “civium”: Capture positive returns of scale through global fractal networks
- Don’t need to live in cities to collaborate globally (e.g., the gravitational-wave observatory took 30 years of worldwide collaboration)
- Big cities give him “heebie-jeebies” - expects they’ll “gradually evaporate” in Game B
- Young people will still need to travel and explore between communities
- Life cycle approach: exploration when young, settling later
My (Matt Brooks) Take: I understand the point that cities have positive scaling laws across both the good and the bad. I just think that, in theory, we could limit the bad with technology. But regardless, I don’t think cities exist only because people have to move there for the opportunities. I think some people, as Liv points out, just enjoy living in cities when they’re young. It’s more chaotic in a good way, more energetic, more exposure to ideas. And sure, a lot of that can move into AR, VR, and online (online you can already be exposed to more communities and ideas), but there’s something about being there in person: the somatically energetic environment, the bustling energy, the city that never sleeps.
I definitely don’t believe that most city dwellers secretly wish they could take the same job remotely from a small town. No, I think a lot of people actually just like living in a city. So I don’t expect cities to cool down when Game B starts to take off. I think it’s just his personal bias: he said that when he goes into a city he feels off, he gets the heebie-jeebies. Some people just hate cities, and that’s fine. I just think you’re different from the people who love cities.
Section 10: Machine Consciousness (1:54:00-2:17:00)
Jim’s Views:
- Advisor to California Institute for Machine Consciousness (CIMC.ai)
- Two reasons to pursue: (1) Better experimental substrate for studying consciousness, (2) “De-blathering” the discourse
- Rejects hard problem of consciousness - 80% confidence in materialist/functionalist theories
- Consciousness ≠ Intelligence (bacteria are intelligent, Amazon rainforest is intelligent but not conscious)
- Consciousness requires specific architecture creating unified sensory scene
- Machine consciousness will be analogous not congruent - like industrial digesters vs. human digestion
- Can probably create AGI/ASI without consciousness - “maybe we should”
- Strongly opposes AI that convinces people it’s human - should be a felony
Section 11: Information Agents & The Trillion Dollar Opportunity (1:46:00-1:54:00)
Jim’s Proposal:
- Personal AI agents that filter information based on user preferences + some serendipity
- Include human curation workstations (paid curators feeding networks)
- Negotiated sense-making: Agents communicate with each other to collectively curate
- Built his own Substack curator (117 subscriptions) - ranks by interest, provides summaries, illustrations
- Took ~20 hours to build using podcast transcripts as training data
- Key rule: No advertising (or user sets price, e.g., “$10 to send me an ad”)
- Will require payment ($100-200/month initially) - may create bifurcation
My (Matt Brooks) Take: I definitely agree with the idea of personalized agents filtering the world for you. I started building this myself. I built Matt GPT based on all of the conversations I’ve had that I could scrape and tried to feed it all of my preferences and say this is what I like, this is what I believe, this is what I want to do, this is what I want the world to be like.
And I definitely had the idea of having Matt GPT read my Twitter or read Substack or read the news or read local event listings and pull out and say, “Hey, these are the top things that you should be looking at. Don’t subject yourself to the slop or the addictive feed, the algorithm trying to entrap you in the feed. Have your AI read the feed for you and just give you the spark notes.”
I think it’s going to be harder than he thinks, and it might cost, like he said, $100-$200 a month or more, and very few people would pay for that. So it would be a really hard cold-start problem. In the future, as models get cheaper, the price will definitely come down; people would probably pay $20 a month, but I don’t know. In two years, when it’s down to $20 a month, I think the world of agents, news, and social media is going to look so different that I’m not sure this method would be the right one. Or maybe this method will be provided as a service directly from OpenAI or something; they’ve already started that with their Pulse feature, so it might just be built into the model systems themselves.
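The core loop of a filtering agent like the one described above can be sketched in a few lines. This is a hypothetical illustration, not Jim’s actual Substack curator or my Matt GPT: the `score` and `curate` functions, the preference set, and the sample feed are all made up. It ranks items by overlap with stated interests and adds a small random bonus, corresponding to Jim’s “some serendipity”:

```python
import random


def score(item, preferences):
    """Fraction of an item's words that match the user's stated interests."""
    words = set(item.lower().split())
    return len(words & preferences) / len(words)


def curate(items, preferences, serendipity=0.1, top_n=3, seed=None):
    """Rank items by preference match, plus a small random bonus so
    off-interest items occasionally surface ("some serendipity")."""
    rng = random.Random(seed)
    ranked = sorted(
        items,
        key=lambda it: score(it, preferences) + serendipity * rng.random(),
        reverse=True,
    )
    return ranked[:top_n]


prefs = {"complexity", "ai", "consciousness", "governance"}
feed = [
    "new ai consciousness paper drops",
    "celebrity gossip roundup",
    "governance experiments in small communities",
    "slot machine revenue hits record",
]
for item in curate(feed, prefs, seed=42):
    print(item)
```

A real version would use an LLM or embeddings instead of word overlap, but the shape is the same: the agent reads the whole feed so you don’t have to, and hands you a short ranked list. The “$10 to send me an ad” rule would just be another term in the scoring function.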
Section 12: Rapid Fire Predictions
Key Probabilities Jim Gave:
- Flip phone return (2 years): 30%
- Game B proto-community of 1,000 people by 2030: 80%
- Game B proto-community of 1 million by 2040: 30%
- Living in Game B community for 1 year by 2035: 50%
- Current AI models are conscious: 0%
- Functionalist theory of consciousness is accurate: 80%
- Hard problem is false: 80%
- AGI by 2030: 40%
- AGI by 2040: 98%
My (Matt Brooks) Take: Yeah, the rapid fire probabilities say something about his worldview. He thinks Game B is important, which I do conceptually; I disagree with some of it and agree with a lot of it. But I just think AI is the defining thing over the next three years, and in three years I don’t think Game B can scale to a size that would matter. And what’s funny is that he thinks that too. He said the probability that a Game B community grows to a million people by 2040 is 30%, but AGI by 2040 is 98%. So he’s admitting that AGI will be here before there’s even a million people in a Game B community. I think that’s probably true, and then all that matters is shaping the path to AGI: before AGI, during the transition, and post-AGI. If you admit it’s only 30% likely you’ll have a million people in Game B by then (a 30% chance of a million is an expected value of 300,000 people), is that really the leverage that would matter for AGI, or are you just going to not matter?
I think Game B is going to take too long because it requires too many physical humans to coordinate, and that all of the leverage in the next few years is either:
- Software leverage—leveraging software somehow to coordinate people in software space or something, because coordinating people in physical space just takes too long and it’s too messy
- Or it’s directly pushing on the levers that matter like government policy or the labs themselves or researchers themselves, or maybe shaping public opinion but again probably with software so that public opinion can influence the labs or can influence the policymakers
But I don’t think Game B—in the idea of we’re gonna have membranes and communities and different types of communities doing their own thing and signaling to each other what’s better and stuff—I don’t think that will matter before AGI because it’s too slow. And then after AGI I think we can’t predict what will happen. I like the ideas of different membranes doing their own thing and libertarianism and pro-social, non-status game—all this is great. But I’m not gonna start planning my life post-AGI when pre-AGI just doesn’t matter.
Final Takeaway
Jim’s “Win-Win” Advice: Before liking/retweeting/sharing anything, ask: “Is this good for (1) me, (2) people who receive this signal, and (3) the human race?” If not yes to all three, don’t do it. This is “omni-consideration” in practice.
My (Matt Brooks) Take: I really like his final takeaway. I try to only engage with things I like and are good for me and good for society. “Focus on the things you want more of” etc.