Jim Rutt on the Win-Win podcast

Date: October 2025 Type: Podcast


Summary of Jim Rutt Podcast Transcript

Section 1: Technology Addiction & Digital Hygiene (00:00-09:00)

Jim’s Points:

My (Matt Brooks) Take: I definitely believe that technology can be very addictive, and digital hygiene is key. Too many people are wasting too much time on reels and TikTok on content that isn't benefiting them, and Sora is obviously the next step, where the slop isn't even human-generated but AI-generated. I don't think a six-month social media sabbatical or a flip phone is scalable at all. I'm not sure what the answer is: you're going up against a lot of vested interests competing for people's attention on these feeds, and there won't be an easy solution. But I agree it's a problem, and I've tried to deal with it myself with custom blockers and having my phone automatically switch to black and white after 6:00 p.m., but obviously that's not scalable for other people. So I think this is a big open problem.

Section 2: Slot Machines & Psychological Manipulation (09:00-14:00)

Jim’s Points:

Section 3: Career Awakening to Game A Problems (14:00-20:00)

Jim’s Assertions:

Section 4: Manufactured Need vs. Real Innovation (20:00-26:00)

Jim’s Views:

My (Matt Brooks) Take: I mostly agree with this section. My only caveat is that advertising and marketing are not always bad. It's hard to discern the good from the bad, and most of it might be bad. But if some advertising shows me a product or service that I end up buying and liking, and I get positive ROI on what I put in, then that advertising or marketing is fine. It's a signal that a good company is advertising honestly and fairly and selling a product or service you actually benefit from. If you get more value out than the money you put in, that's a great signal to receive. It's just somehow really hard to serve genuinely great marketing signals, and obviously a lot of companies will want to manufacture demand and run bullshit marketing, but not 100% of them.

Section 5: Status Games & Market Dynamics (26:00-36:00)

Jim’s Analysis:

Section 6: Game B Definition & Core Concepts (36:00-48:00)

Jim’s Framework:

My (Matt Brooks) Take: I totally understand the concept of the bluefish coin and multi-dimensional tokens, but if they actually believe you need a real cryptocurrency for every limited resource, that's going to be really hard to get people to use. You would obviously need AI agents: everyone would need a personalized AI agent managing their wallet, because you're not going to manage 1,000 different currencies yourself. It would take too much time and effort, and it just wouldn't be worth it. But crypto uptake among ordinary people for daily purchases is still basically zero percent, and it will probably stay very low over the next three to five years. So I don't think this method is really scalable, or that it's going to work. I totally understand the point, and I like the idea of exploring different market mechanisms, different ways of providing signal and sharing resources, and limiting our effects on at-risk ecosystems. But if the solution is actually crypto coins, I think there's basically no way.
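To make the "agent-managed wallet of many resource currencies" idea concrete, here's a toy sketch. Nothing here comes from the podcast: the token names, balances, and per-resource spending caps are all invented for illustration, and a real multi-token system would obviously be far more involved.

```python
from dataclasses import dataclass, field

# Toy sketch: a wallet holding balances in many resource-specific
# tokens ("multi-dimensional" currencies), plus a minimal agent policy
# that only spends within the owner's per-resource cap. All names and
# numbers are hypothetical.

@dataclass
class MultiTokenWallet:
    balances: dict = field(default_factory=dict)  # token name -> units held
    caps: dict = field(default_factory=dict)      # token name -> units the agent may still spend

    def deposit(self, token: str, amount: float) -> None:
        self.balances[token] = self.balances.get(token, 0.0) + amount

    def agent_spend(self, token: str, amount: float) -> bool:
        """Spend only if within both the balance and the owner's cap."""
        cap = self.caps.get(token, float("inf"))
        if amount <= self.balances.get(token, 0.0) and amount <= cap:
            self.balances[token] -= amount
            self.caps[token] = cap - amount
            return True
        return False

wallet = MultiTokenWallet()
wallet.deposit("water-credits", 100.0)
wallet.caps["water-credits"] = 30.0
print(wallet.agent_spend("water-credits", 25.0))  # True: within balance and cap
print(wallet.agent_spend("water-credits", 25.0))  # False: only 5.0 of the cap remains
```

The point of the sketch is the scaling problem: with 1,000 such tokens, a human can't track the caps and balances, so the policy logic has to live in an agent.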

Section 7: Why Game A is Self-Terminating (48:00-59:00)

Jim’s Arguments:

Section 8: Planetary Boundaries & Degrowth (59:00-1:35:00)

Jim’s Position:

My (Matt Brooks) Take: I'm not convinced that Game A is self-terminating. I understand the point about exponential growth and finite boundaries, but I think with AI it's actually possible to generate more power and electricity each year than people want to consume. I actually do think it's possible for everyone in the world to have the same creature comforts Americans currently enjoy and still be fine, and even more than that. So I'm definitely against this 4,000-watt-society thing. People won't go for it, and I don't think it's the right thing to shoot for.

I understand the difference between degrowth meaning have fewer things and degrowth meaning value more intricate, meaningful, important things that last. I just think it's not a memetically positive story: no one ever wants to have less once they've had more. I'm definitely not someone who chases fast fashion, buys status symbols, or wastes money; I spend very little, and I totally understand that more meaning might mean fewer things, or fewer but better and more personal things. On a personal level I totally subscribe to it. I just don't think you're going to get there on a societal level in the next five years.

And with AI and energy advancements, I think we should instead be pushing to put solar everywhere we can, put fission everywhere we can, and work on fusion. The problem isn't how much energy we create; it's how much CO2 we put into the atmosphere, it's global warming, and it's how many resources we have to consume in harmful ways to create that energy. I just don't think solving that is hopeless. And I would guess the median Game B supporter lives an 11,000-watt lifestyle and would find it very hard to come down to 4,000 watts.

Section 9: Cities & The Civium Concept (1:20:00-1:24:00)

Jim’s Hypothesis:

My (Matt Brooks) Take: I understand the point that cities have positive scaling laws across both the good and the bad. I just think that, in theory, technology could limit the bad. But regardless, I don't think cities exist only because people have to move there for the opportunities. Some people, as Liv points out, just enjoy living in cities when they're young: more chaotic in a good way, more energetic, more exposure to ideas. Sure, a lot of that can move to AR, VR, and online (online you can already be exposed to more communities and ideas), but there's something about being there in person: the somatically energetic environment, the bustling energy, the city that never sleeps.

I definitely don't believe that most city dwellers secretly wish they could take the same job remotely in a small town. No, I think a lot of people actually just like living in a city. So I don't expect cities to cool down when Game B starts to take off or something. I think it's just his personal bias: he said that when he goes into a city he feels very off, he gets the heebie-jeebies. Yeah, some people just hate cities, and that's fine. You're just different from the people who love them.

Section 10: Machine Consciousness (1:54:00-2:17:00)

Jim’s Views:

Section 11: Information Agents & The Trillion Dollar Opportunity (1:46:00-1:54:00)

Jim’s Proposal:

My (Matt Brooks) Take: I definitely agree with the idea of personalized agents filtering the world for you. I started building this myself: I built Matt GPT based on all of the conversations I've had that I could scrape, and tried to feed it all of my preferences: this is what I like, this is what I believe, this is what I want to do, this is what I want the world to be like.

And I definitely had the idea of having Matt GPT read my Twitter, or Substack, or the news, or local event listings, and pull out the highlights: "Hey, these are the top things you should be looking at. Don't subject yourself to the slop or the addictive algorithm trying to entrap you in the feed. Have your AI read the feed for you and just give you the spark notes."
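As a toy sketch of that "have your AI read the feed for you" loop: score each incoming item against a stated preference profile and surface only the top few. A real version (like Matt GPT) would use an LLM over scraped conversations; this self-contained stand-in uses plain keyword overlap, and all the feed items and preference terms are invented.

```python
# Hypothetical sketch: rank feed items by overlap with a preference
# profile and return only the top-k relevant ones, dropping the rest.

def score(item: str, preferences: set) -> int:
    """Count how many preference keywords appear in the item."""
    words = set(item.lower().split())
    return len(words & preferences)

def top_items(feed: list, preferences: set, k: int = 3) -> list:
    # Stable sort by relevance, highest first; discard zero-score slop.
    ranked = sorted(feed, key=lambda item: score(item, preferences), reverse=True)
    return [item for item in ranked[:k] if score(item, preferences) > 0]

prefs = {"ai", "agents", "energy", "fusion"}
feed = [
    "Celebrity gossip roundup for the week",
    "New open-source AI agents framework released",
    "Fusion startup reports record energy gain",
    "Top 10 dance trends on short-form video",
]
print(top_items(feed, prefs, k=2))
# → ['New open-source AI agents framework released', 'Fusion startup reports record energy gain']
```

The filtering step is trivial; the hard (and expensive) part, as the cost discussion below the fold suggests, is building a preference profile rich enough that the ranking actually reflects what you want.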

I think it's going to be harder than he thinks, and it might cost, as he said, $100-$200 a month or more; very few people would pay that, so it would be a really hard cold-start problem. In the future, as models get cheaper, it will definitely come down in price, and people would probably pay $20 a month. But in two years, when it does cost $20 a month, the world of agents, news, and social media is going to look so different that I'm not sure this method would be the right one. Or maybe it gets provided as a service directly by OpenAI or someone like them; they've already started that with their Pulse feature, so it might just be built into the model systems themselves.

Section 12: Rapid Fire Predictions

Key Probabilities Jim Gave:

My (Matt Brooks) Take: Yeah, the rapid-fire probabilities say something about his worldview. He thinks Game B is important, which I do conceptually; I disagree with some of it and agree with a lot of it. But I just think AI is the defining thing over the next three years, and in three years I don't think Game B can scale to a size that would matter. What's funny is that he thinks that too. He said the probability that a Game B community grows to a million people by 2040 is 30%, but AGI by 2040 is 98%. So he's admitting that AGI will be here before there's even a million people in a Game B community. I think that's probably true, and then all that matters is shaping the path to AGI: before it arrives, during the transition, and after. If you admit it's only 30% likely you'll have a million people in Game B by then (in expected-value terms, 30% of a million is 300,000 people), then really, is 300,000 supporters in 15 years the leverage that would matter for AGI, or are you just going to not matter?

I think Game B is going to take too long because it requires too many physical humans to coordinate, and that all of the leverage in the next few years is either:

But I don't think Game B (the idea that we'll have membranes and different types of communities doing their own thing and signaling to each other what works better) will matter before AGI, because it's too slow. And after AGI, I think we can't predict what will happen. I like the ideas: different membranes doing their own thing, libertarianism, pro-social non-status games; all of this is great. But I'm not going to start planning my life around post-AGI when, pre-AGI, it just doesn't matter.

Final Takeaway

Jim’s “Win-Win” Advice: Before liking/retweeting/sharing anything, ask: “Is this good for (1) me, (2) people who receive this signal, and (3) the human race?” If not yes to all three, don’t do it. This is “omni-consideration” in practice.

My (Matt Brooks) Take: I really like his final takeaway. I try to engage only with things I like that are good for me and good for society. "Focus on the things you want more of," etc.