Jim Rutt on the Win-Win podcast transcript


Jim: [00:00:00] Playing an infinite game on a finite planet, is that itself fundamentally a problem? It didn't appear to be a problem when it started. The fact that the market was now programming the consumers fundamentally changed the nature of the game. When I'm growing wheat and you're growing wine, we both want bread and wine. Trade between us is utterly natural and benign and makes us both better off. But if I can beam messages into your brain saying that you need pink sunglasses and sparkly flip-flops, suddenly we have a totally different kind of market.

Liv: Hello everyone, and welcome to the Win-Win Podcast. As you know, this podcast is all about how we change the game, and I'm so stoked, therefore, to be speaking to today's guest, Jim Rutt. Jim is a longtime technologist; he's actually one of the early pioneers of the internet. He's also been the chairman of the Santa Fe Institute, which is kind of the number one place on earth to study complexity and systems design. Even [00:01:00] cooler, he's one of the co-founders of something called the Game B movement, which is trying to explore the very difficult question of how we level up our civilization away from these old-school, zero-sum, status-game-type dynamics that are kind of unsustainable, into a new, strictly win-win paradigm. This movement has actually been very influential on my own thinking and has partially inspired this podcast in the first place. So I'm so, so excited to share this conversation with you all, because, you know, Jim is one of the wisest, funniest, and most interesting people I have ever met. So on that note, here is my conversation with Jim Rutt.
Liv: So Jim, to get us started, you have, or at least last I checked, you had become my hero, because you are one of the only people I know on earth who has managed to transition from using a smartphone to a flip phone, which is something I [00:02:00] have aspired to do as someone with an incredibly deep smartphone addiction that makes me miserable. That was in 2019, when you wrote that you were transitioning to one.

Jim: Right.

Liv: Are you still a flip phone person?

Jim: I know you're gonna disrespect me. It lasted six months.

Liv: No!

Jim: And I can tell you what the downfall was. In that article I did say I was gonna keep my smartphone on the shelf for traveling, 'cause I couldn't live without Uber, right? And so we went on a trip, and I picked up the smartphone, and that was that.

Liv: Okay, so what went wrong?

Jim: Goddamn things are so addictive, so useful, and so handy. So all the things I wrote in that essay are indeed true, and they sucked me back in. And I will say, I do occasionally say I should do it again, and I could do it. I got pretty strong will. I could at least do it for six months. But I haven't. Right? Just like, you know, "I'm not gonna drink anymore. I'll never drink again." You know how many times people have said that, right?

Igor: Mm-hmm.

Jim: So, yep, I hate to disappoint you, but it lasted [00:03:00] six months.

Igor: Have you tried the kind of grayscale-on-the-phone version?

Jim: Or the brick or something?

Igor: The brick, exactly.

Jim: No, I have not. I will say I am quite aware of technical addiction and have what I'd call reasonably good tech hygiene. You know, like, try to avoid bringing your phone into your bedroom, or if you do, put it in the bathroom. I have all my devices' notifications turned off. I preferentially use web desktops rather than apps.

Liv: Mm-hmm.

Jim: I always say the apps ruined the internet; the empire struck back, basically, 'cause apps are not open while the web is.

Liv: Have you found any particular techniques that have stuck with you? 'Cause one of the things you have talked about is that you will take sometimes three or four months off of social media.

Jim: Ah, I stick with that one: six months a year I am off Facebook and Twitter, from July 1st to January 2nd. And as of this most recent turn, it's six years I've been doing that without fail. And I did sneak back for a [00:04:00] couple of days around the election, and that was it.
Jim: And I actually started that routine like 15 years ago. My longest-term online community is a thing called the Well, which is this very interesting, old-line... I sometimes joke and say it's the Colonial Williamsburg of the internet. Half the users are still using the command-line version.

Liv: Oh, damn.

Jim: And the web version dates from about 1997, and it's like a 1997 website, basically. When the community started in 1985, it was a spinoff of the Whole Earth Review, you know, that hippie magazine, and the Whole Earth Catalog and all that. And I'm still a member, actually. But 15 years ago I started taking what I called sabbaticals.

Igor: Mm-hmm.

Jim: Sometimes it'd be three or four months, sometimes it'd be a year and a half. And when I come back, it's exactly the same.

Igor: So what do you notice in that time that changes in terms of your day-to-day? Do you notice something about your attention span changing? Do you notice something about the effects of dopamine?

Jim: Yeah. [00:05:00] I also probably have a somewhat unusual usage pattern, in that I'm much more of a writer than I am a reader. I don't, you know, doomscroll on the things. I post something provocative, come back tomorrow and see what somebody said, and then, you know, bang people's heads together and make 'em be sensible. Right? Or not. So I tend to use it that way more. And my downfall, and what it's so freeing to escape, and this was true on the Well even 15 years ago, is that if I'm engaged in a serious confrontational conversation or argument, my background brain is processing constantly. And instead of thinking grand thoughts about how to save the world, I'm thinking about how to make X look like a fool about topic Y. Which is a complete waste of time, right? And so that is the most liberating thing about my sabbaticals: no more background processing on how to win internet flame wars. And then, of course, just the time. When I'm on, I'm probably spending [00:06:00] 30 to 45 minutes a day on average, and 30 to 45 minutes a day is a lot of time. Right? And so I feel great. And then inevitably now, since I've been doing this Facebook-Twitter sabbatical, by about the 1st of June I'm going, "Goddammit, July, when are you gonna come here so I can get off of this stuff?" Kind of feel like Frodo halfway to Mordor. Right. Halfway actually in Mordor,
halfway to Mount Doom, right? The weight of this thing, it's just dragging me down. And of course you could say, "Well, you can quit anytime you want." And I go, you know, Frodo couldn't throw the ring away in the end, right?

Liv: Yeah, you need a Gollum to bite your finger off. Exactly. That's what I'm trying to look for, basically, with my smartphone. And I mean, most of my smartphone addiction is social media. So it seems like you and I... I dunno if that was also one of your motivations to try and get rid of the smartphone.

Jim: It was, actually. It was triggered, as I said in the article, by getting an Apple Watch, and [00:07:00] the goddamn thing was so annoying. It was beeping all the time. And, you know, "Stand up." I go, I'm a fat motherfucker, I don't wanna stand up. Right? I assume cussing's okay on your side?

Liv: That's fine. Anything goes.

Jim: You know, it's too much work. And then it was like, beep, do this, do that. "Oh, you haven't done your second circle," blah, blah, blah. And I took the damn thing and put it in my sock drawer and never saw it again. And that got me thinking about how these things were making us dance.

Liv: Right.

Jim: And then I went and did a long dive of research, which I document in the article, and I said, whoa, these things are not just annoying. They are dangerous and addictive. And for exactly the same reason slot machines are addicting, right?

Liv: The intermittent rewards.

Jim: Yeah, classic intermittent reward, dopamine drip. I mean, it's pretty much exactly what the psychology rat torturers learned 30 years ago, right? And an interesting side tale: one of my business buddies left the company we were both at and became the COO of the third-biggest slot machine company, which was [00:08:00] headquartered in Vegas. Next time I was in Vegas, he and I went out and had a somewhat liquid dinner and talked about, you know, how are things in the slot machine business? He said, "Oh yeah, we're the third biggest, and we have 200 PhD psychologists, most of them graduates of the rat-torturing division of psychology. And the big guys have over a thousand." So those things are literally scientifically designed, intermittent-reinforcement dopamine drips. And as I referenced in that essay, at the time I wrote it in 2019, I did a search of the Facebook job board for the word "psychology." Of the jobs listed on the open recs on the public Facebook job board, 700 of 'em had the word psychology in them.

Liv: Mm. It's really crazy. Did you feel any conflict with that friend,
knowing that they've basically put their one life into creating such an addictive technology, in this case slot machines?

Jim: Well, I gotta [00:09:00] say...

Igor: And on top of it, take 200 psychologists that could have otherwise done something else, something meaningful with that wealth of knowledge...

Liv: ...and instead done something extractive.

Jim: This is the odd thing about my career. As I often say on my podcast, yeah, I was certainly a Game A motherfucker, right? And so was he. We were all just Game A motherfuckers. You played the game as hard as you could, and you responded to local signals, right? You climbed the local gradient. You didn't think globally at all. You played the finite game, not the infinite game. And that's just what we did. And it was kind of cool that they had 200 guys and said, what a good way to fuck people over and addict them to slot machines. Right. It also explained to me why my mother and all of her sisters absolutely love slot machines. You know, these Midwestern gals who are quite tough, self-sufficient people; particularly after their husbands died, they loved to go out to the casino and play the slot machines.

Igor: Explain that to me, because it's a phenomenon that's basically a meme that we've all observed, and...

Liv: Right. We are poker players, so we know.

Igor: We've been to Vegas multiple times. Right.

Liv: We spent a lot of time in casinos, and I have no [00:10:00] interest. I walk past the slot machines and I'm like, that looks like death to me.

Igor: Tell me why. Why is it specifically that kind of profile you described?

Jim: Yeah. First let me step back a minute and give my own take on slot machines. You know, when I was in business, we'd often go out to Vegas for conferences and stuff like that. And my rule of thumb was: before I went, I bought a roll of quarters. When the roll of quarters was done, so was I, with respect to any gambling in Vegas. Until later I developed a moderate poker habit, and I was actually pretty good at it, as long as I didn't accidentally sit down at a table with a pro, which I did once and got my ass whipped, though the education was well worth $800 to be applied elsewhere. Against average fish I could make money. But anyway, I would often describe my visceral psychological reaction to a slot machine as being like sticking your face in a disc sander. It's just gonna gradually run you down, and it's designed to gradually run you down, you know? And, having a mathematical bent, you always think of a math series [00:11:00] going to infinity. What happens? The grinder's ground all the way to your brain. You're dead, right?
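Jim's "math series going to infinity" point can be made concrete with a tiny simulation. Everything here is illustrative: the payout table below is hypothetical, not from any real machine, but any schedule that returns less per spin than it takes in grinds every bankroll to zero while still delivering intermittent, unpredictable wins.

```python
import random

def spins_until_bust(bankroll=100, bet=1, seed=0):
    """Play a hypothetical slot machine until the bankroll is gone.

    Illustrative payout table: ~0.1% of spins pay 500x the bet,
    ~4.9% pay 8x, the rest pay nothing. Expected return per spin is
    0.001 * 500 + 0.049 * 8 = 0.892 bets, less than the 1 bet risked,
    so the series "going to infinity" ends at zero every time.
    """
    rng = random.Random(seed)
    spins = 0
    while bankroll >= bet:
        bankroll -= bet
        spins += 1
        r = rng.random()
        if r < 0.001:
            bankroll += 500 * bet   # rare jackpot
        elif r < 0.050:
            bankroll += 8 * bet     # frequent small win
    return spins

# Every seed eventually busts; only the number of spins varies.
results = [spins_until_bust(seed=s) for s in range(20)]
print(min(results), max(results))
```

The rare, unpredictable payouts form a variable-ratio reinforcement schedule, the same intermittent-reward structure Jim attributes to the slot designers; the negative expectation per spin is the disc sander.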
And so it never had any visceral appeal to me. But watching my mother and her sister, and another person to whom I will not even provide any indirect vector, who was another addict: they literally respond to the sounds, the beeps, the payoffs, just like rats in a maze. There's a reason the sounds are what they are. In fact, just like A/B/C/D testing of internet ads, the rat torturers back at HQ are running constant tests on those things. And it's finely tuned to people who are not able to, or choose not to... my mother, a very smart person actually, but she chose to put herself in the reference frame of the game, to be titillated perfectly by the machines.

Igor: But why? I always [00:12:00] see the slot machines as shitty video games that cost you much more per unit time. Why not play a better video game?

Jim: Well, 'cause they're not. The payoff is...

Igor: Like Candy Crush.

Jim: Yeah. No, the payoff is the dopamine reinforcer, right? So you gotta have that.

Liv: And you get like a physical... I mean, you can even have the money come out of the machine. I think that's the one.

Jim: That's old school. They don't do that anymore. No.

Igor: Yeah. So a proper game, like The Last of Us or many other games, has just much slower rewards. Well, Polytopia is kind of in between, right, where each click is actually also designed such that it gives you a bit of reward. And then if you go to Candy Crush, you're already pretty close to a slot machine.

Jim: Well, I can address that indirectly. After my father died... you know, I will say my mother was very disciplined. She never spent any significant money. I managed her money; she never took a single penny out of capital. So whatever was left over from my father's pension check sometimes would go to the slot machines. And I didn't give a shit. So I went, "Hey [00:13:00] Ma, how about I buy you two slot machines and put 'em in your living room?" And she'd go, "That would be no fun, 'cause there's no loss possible," right? And I said, "How about if I took the money?" And she goes, "Well, that would only be two machines, and I want a variety of machines." So the fact that you had skin in the game was essential, right? If, you know, son James would come and open the slot machine once a year, take all the quarters out and dump 'em on the kitchen table, that took all the fun out of it, right? And also, she wants variety; two or three or four or five machines isn't enough.
And, of course, like all these people, they have their favorites. They can sense "this machine's hot right now," right? Which we all know is complete bogus BS. But when you're in the reference frame that that is designed for, it's all very real to you.

Igor: Hmm.

Jim: And she's, you know, bright enough to understand that, I think, although I've never had that precise conversation. But as we know, people vary in where they are in mental hierarchical complexity. [00:14:00] And if you're in the lower ranks of hierarchical complexity, those games are real to you. And those things directly hook into the things you were evolved to do, you know, not get eaten by a tiger and find yourself a rabbit to club to death, right? Basically.

Liv: At what point did you then decide, okay, that's not the type of technology I want to be involved in, I wanna have responsibility in what I build?

Jim: You know, someone asked me this the other night, and I actually have a relatively crisp answer. It was when I was absolutely in the belly of the beast. My career was about half entrepreneurial and half big corporate. Until 1992 I did startups; after '92 I was a corporate dude until I retired in 2001. In '94 I was a mid-level executive for what's now Thomson Reuters; that was the Thomson part. In those days it was an $8 billion company, 50,000 employees, pretty big company. [00:15:00] And I was a mid-level executive, a chief operating officer for six business units. You know, anyway, a mid-level Game A motherfucker, but of the corporate variety. And I got involved on the Well,
in one of the forums, called Sustainability. And for the first time, I'd really paid much attention to the bigger question of man and the ecosystem. I mean, yes, I had gone to Earth Day a few times and stuff, and was generally aware of questions about the population bomb; I read that book. But I would say it was not the kind of science I had typically been focused on. But after about six months of this conversation with people who were absolutely earnest, and some of those Whole Earth Review people were serious movers and shakers, Kevin Kelly and Paul Hawken, people like that were around the Well in those days, I said, goddammit... you know, I didn't have the words for it quite then, but what I [00:16:00] was sensing was: wait a minute, playing a game of infinite exponential growth on a finite planet, that's fucked. That can't work. And then I said, well, why? And then I had the big breakthrough, also late 1994. All this happened in one year. I had this epiphany that the market is one of the greatest inventions that humanity ever made. As we were talking about at lunch, the market is vastly superior to force, which was basically what ran the world prior to modernism. You know, think of it as feudalism or kings or nobility or whatever: somebody with a knife or a sword or an arrow, or more likely their henchmen, that'll beat your ass if you don't do what they tell you. The market, you know, we all participate in it voluntarily, et cetera. All good, right? You know, I loves me my Adam Smith,
my Irving Fisher. It all makes good sense. However, in around 1925, a guy named Bernays, who was Freud's nephew, wrote a book called Propaganda. And [00:17:00] he was the first person to apply science to persuasion, and that became the basis for the advertising industry. I found him during my research. I've always been a compulsive researcher; I jump into rabbit holes and go as far as I can go. And I go, this guy Bernays, I forget his first name, essentially applied the science of psychology to the art of persuasion, and he essentially invented advertising, public relations, all this stuff. And of course, they didn't adopt it instantly, but before long it diffused throughout the economy. And I realized that that is what had gone wrong. Well, not the first thing: the fact that you're playing an infinite game on a finite planet is in itself fundamentally a problem, and we'll get back to why it didn't appear to be a problem when it started. But the fact that the market was now programming the consumers [00:18:00] fundamentally changed the nature of the game. When I'm growing wheat and you're growing wine, and we both want bread and wine, trade between us is utterly natural and benign and makes us both better off. But if I can beam messages into your brain saying that you need, you know, pink sunglasses and sparkly flip-flops, even though you don't need pink sunglasses and sparkly flip-flops, suddenly we have a totally different kind of market. And starting at around that time, the market went nuts and was able to program its own customers. Instead of serving the legitimate needs of the customers, it manufactured needs. And in fact, you start reading the books about advertising from the fifties, and I've read a few, and they actually talk about manufacturing need. The famous example being deodorant: no one knew that stinking was bad until someone convinced them that it was, right? And deodorant's a relatively new phenomenon. You know, now we've [00:19:00] got vaginal deodorant, asshole deodorant, God knows what, right?

Igor: But how do you split between a manufactured need and a truly innovative solution? An innovative solution to a problem
that wasn't before seen as a problem, but can be.

Jim: Yeah. And of course, that's the wonderful thing about technology, right? You know, probably one of the greatest inventions ever was the wheelbarrow, for instance. Prior to that, the rate at which you could do construction was gated by how much dirt you could carry on a pole over your shoulder. The wheelbarrow, which I believe was invented by the Chinese, massively increased the rate at which you could do construction. So there was a real problem, the amount of dirt that a person could carry per unit of time, and the wheelbarrow was a huge innovation for that. Or, say, the railroad: would you rather take a train across the country, or be in a wagon train for three months and, you know, have a chance of being caught in the Donner Pass and having to eat the other members of the wagon train? So those are real solutions, and notice that many of [00:20:00] them were prior to 1930, right? Not to say we don't also still have real solutions. One of my favorites, and it didn't need any advertising to sell, is the electric power drill. You know, I'm old enough to remember the hand-crank drill. We had power drills before, but they were expensive and heavy and all that stuff, so often we used the hand-crank one for small jobs. Now, of course, we all got the power drill, and you know, the Makita power drill, a thing of great beauty, great engineering. And it's great. So I would say many of the things that we have are indeed solutions to real-world problems. But of course, the other part, and this we've always had with us, we can't blame this one on Bernays: humans have always played status games.

Igor: Mm-hmm.

Jim: Right? But take status games plus Bernays, and now
we can psychologically program each other to play the status games that the money wants us to play. Right. And when I say "the money," I literally mean the money as an egregore, [00:21:00] essentially what some people in the liminal world would call Mammon. Right? You know, the three monsters: Moloch, Mammon, and, we added the third one recently, Hypnos.

Liv: Oh, I haven't heard of Hypnos.

Jim: Yeah, this is one that my coauthor and I, working on our Game B book, came up with. We actually surveyed some people, we did some research on mythology. Hypnos is a relatively minor Greek god, of imparting beliefs into people while they're asleep, essentially. And so, what are all the things we think are right that just ain't so?

Igor: The god of psyops, basically.

Jim: Yeah, essentially. But egregore-style psyops, basically. There isn't a little dude in a room saying, "Let's fuck everybody over."

Liv: It's not human-mediated. It's almost like an emergent property, like a psychofauna that runs on sort of...

Jim: And for all three of them, the inner loop is the money-on-money return. Right? The signal that comes out of the money, the short-term money-on-money return, call it three to five years, essentially is the coordination mechanism, the signaling [00:22:00] modality that all three of the adverse gods of the modern system respond to and are fed by. Right? If I'm a good spreader of status-symbol ideas, like that drinking Bud Light will get you laid or something, then I could make a lot of money doing that as an advertising executive, or the owner of a beer company, or even the owner of a local beer distributor.

Igor: I mean, at the same time, in this case it might even become true, right? If enough people believe that drinking Bud Light gets you laid, and you see someone drinking Bud Light, you assume that they will be... yeah. That kind of
reinforces it, and it could become a self-fulfilling prophecy in a way, right?

Liv: It could be sexually suggestive to a female who sees it.

Igor: Yeah. But no, I agree. It touches on this thing that Liv and I once drew up on a whiteboard, which is probably an existing theory just rephrased, but on the advertisement point: what does the market try to do? Via the mechanism of money, it's [00:23:00] a very good system for aggregating a lot of information, via the method of pricing things, which then allows this information to flow without actors having to collaborate knowingly or anything like that. Right?

Jim: That's the core. That's the core transactional value of money.

Igor: Right, that's when it functions well, and this is why it's better than any previous system: it was just better at aggregating this information about real needs. But I think the question is about what this information is that it aggregates. Beforehand, someone's desire to purchase something freely and non-coercively meant that it fulfilled various needs that they had. Once you add advertisement, their willingness to buy is mediated not only by need but also by all of these other dynamics that were added to it. And now, across some domains, the market starts aggregating not what is useful or good for the [00:24:00] purchaser, but just what they will buy in the short term, quickly.

Jim: And which is profitable. This is why I always say: which is profitable. You have to understand that it's not just Moloch and Mammon and Hypnos; it's also the inner money-on-money-return loop. Right? Because even if you could convince people that standing on your head will get you laid, no one's gonna fund the propagation of the message "standing on your head will get you laid." There's no money in it.

Igor: Right. Yeah. There are some things which are a good business and some which are a bad business, and that also matters.

Jim: So that's the first cut. You know, having been involved in early-stage companies and financing early-stage companies and all that, it's the first cut: okay, it's a good idea, but who will pay for it?

Igor: Right.

Jim: And what would it cost to supply, and is it likely to be profitable at scale? And if the answer is no, you just don't even think anymore about it. The money-on-money return is the ultimate god. It's the Zeus of the gods.

Liv: Because it's a positive feedback loop.

Jim: Yeah. And it feeds all the others and powers them up. Right.

Igor: Well, compound interest [00:25:00] is like one of the most amazing... everyone wants... yeah.

Jim: Famously, Einstein was asked: okay, relativity was the most amazing idea you ever had; what's the second most amazing idea in the world? And he said, oh, definitely compound interest.

Liv: Mm-hmm.

Igor: So the thing that I realize is,
if this kind of delta in the information that the user or customer has is key, then according to that, markets should work better in situations where what the user buys more often is also the thing fulfilling the needs that they have. And I think that's actually kind of the case. And it depends on feedback loops and transparency into what they're buying. So, for example, I think the market works amazingly for TVs. TVs become cheaper quickly, and people get much better ones. Just how much cheaper TVs have become is incredible, right? And the customer can just very clearly see it. Similar with cars: the customer can evaluate things. There are [00:26:00] certain things where the customer has no ability to evaluate, like diet, for example. Very, very hard; we have no information. And that's a great spot for advertisement to come in. That would be at least one theory: basically, the more delta there is in the customer's ability to evaluate the product, the more space there is for the advertiser, and the less well market systems work. Do you think?

Liv: Well, and more specifically, it's this decoupling between what the consumer actually wants and what the consumer actually needs.

Jim: Let me speak specifically to your two examples, 'cause actually that's quite interesting.
'Cause at one level, TVs right now are almost purely on a utility curve, right? What features do I get, at what price, and how big? However, when I was a kid, TVs were a huge status symbol. They came in these big fancy furniture consoles, and they basically sucked and were unreliable. We never even had a color TV in our house till I went off [00:27:00] to college, right? And having a color TV was a status symbol. And having one of these big console things... we never had one of those either; we just had a TV that sat on the table. So the status game should not be underestimated. And of course, there are always status games with humans. People say, "Oh, Game B's trying to get rid of status games," and I go, no, we're not fools. Humans will always want to distinguish themselves. But why let the money-on-money-return machine define what the status games are? And so TVs used to be a status game; not much anymore. Nobody really gives a fuck what brand of TV you have, right? Even in the double aughts, if you had a Sony TV, that was considered cooler than any other TV, and people would go, "Oh, you got a Sony. That's cool." Well, now I couldn't even tell you what TV I've got. Nobody cares. Cars, though, are still quite the status game, right? Why in the world would somebody buy a Range Rover, for instance? Or, my favorite pet peeve, the G-Wagon, you know, the Mercedes piece of shit that costs $150,000. What they really [00:28:00] ought to do is just get a tattoo on their forehead, "I am a douchebag," and it would have the same effect, essentially. There's absolutely no rational reason anyone would own either of those two cars. They're not better than cars at half the price at any of the missions they can fulfill, but they're a status symbol. And in fact, I talked to one of my best friends who's involved in the Game B movement; I like to tease him a little bit about this. People say, "Well, how do you know when we're in Game B?" And I say, "When you drive up in your Porsche, we'll all laugh at you." Right? And, you know,
So in part, at the low end of the car market, I think you're right. You buy a $16,000 Kia versus a $16,000 Hyundai or a $17,000 Toyota; it's pretty much features, price, size,
whatever. But get much above that, and it's exapted, which is a biological term for
taking something that existed for other biological reasons and using it for [00:29:00] other purposes. You've exapted the status game, and as it turns out, there's almost no profit in those little
cars, but there's huge profit in the status cars. They don't cost that much more to make,
and they sell for like six times as much. You don't have to be a genius to realize that. Or, you know, Porsche is worth more than Volkswagen, right? Things like that. How could that be? Because it's so much more profitable. So
only well-articulated PR, advertising, et cetera, maintains the fact that a Volkswagen
with a little shield on the front, a slightly faster motor, and a slightly better suspension, but basically the same vehicle, sells for two and a half times the price. So I would say TVs have made the transition from being a huge status game. In fact,
just having a TV, just before my time, was a big status symbol. I was born in '53,
and I sure don't remember the world much before, say, '56.
But in the early fifties, just having a TV was a status symbol, and the kind of TV you had
was probably a status [00:30:00] symbol up till the double aughts, but now it isn't. Igor: Mm-hmm. Liv: I'm trying to think... I don't know, I'm not a teenager anymore,
but sadly we used to care about who had what phone. Igor: Yeah. Yeah. Liv: But I would imagine that now it's less about what the phones physically are and more about what apps you have, who's, you know,
it feels like the status games have fully switched into the digital realm. Igor: I would again claim that, according to this, I would expect the market to provide better phones, because the user can very quickly evaluate the value, right? Liv: There's a type of value- Igor: It's very hard to play strong advertisement status games when something is abundant. When things are rare, you can play the game better. Jim: Okay. That's the false scarcity game. You know, if I were the dictator of the world for a week, which would be a good idea, you know, I would get rid of the whole luxury
goods industry. Is there anything more revolting than people spending a thousand dollars on a purse or $250 on a scarf because it's got a brand [00:31:00] on it?
But it's interesting about the phones, 'cause watches are probably the most disgusting of all these luxury games, you know, the $20,000 watch.
And it's interesting, I've mentioned this before with people: why isn't there a Rolex of phones? It's interesting 'cause you can ascertain the value of a watch just fine. It's actually easier to understand than a smartphone. And
yet there's the status game of watches. For a long time, on principle, I wore a $23 Timex, the Indiglo Timex, had a little light on it, you go like that, right? And now I have to admit I wear an Apple Watch, 'cause I can monitor heart rate and
stuff like that with it. But I certainly would never be sucked into the game of paying $20,000 for a watch. And yet that is a huge industry. So I'm not quite sure
how we square that circle: there is no Rolex of phones, which is odd in some ways, but there is the Rolex of watches. Igor: I mean, maybe just because- there could be, but watches came before phones, when [00:32:00]
the status game around them was just one of the games that you could play, right? Like,
there weren't many options, basically, especially for a man, to show kind of anything. It's restricted, because men don't have handbags; it's restricted to watches and glasses.
You can have very fancy glasses, you can have very fancy watches, and otherwise- Jim: And shoes. Igor: Shoes. But otherwise, clothes,
men don't play those games very much. I think the watch market is very man-heavy. Jim: Yeah. There are fancy lady watches too. Igor: There are, but usually- Liv: And girls do handbags. Igor: Yeah, handbags, and of course the whole clothes thing, right? Jewelry, men
don't do jewelry much. Men usually do watches; it's the jewelry of the men. Liv: Yeah, totally. Igor: So that's why I wonder. Okay, they could have now moved to phones,
but then you're starting a whole new thing, whereas it's much easier to maintain a- Jim: It may have been a- Igor: preexisting system. Jim: It may have been a founder effect too. You know, Jobs, despite being a sociopath, was also a pretty strong egalitarian,
right. He thought that what he was doing was bringing power tools to everybody.[00:33:00] And so he may have forbidden playing the status game with Apple,
'cause there was certainly a period when the iPhone was a status symbol itself, compared to
what were those little thingies, the BlackBerrys, right? Mm-hmm. You'd look at people with BlackBerrys and go, ew, right? Teenage girls would do that. And then there was a bit less of that, but still some, with the early Androids, which were pretty cruddy actually. Igor: Mm-hmm. Jim: And it may have been that Jobs forbade playing the status game with iPhones. Igor: It's also actually been embraced by the people, I think, the idea that
Bill Gates has the same phone as you; it's a thing that gets mentioned more often. Jim: Yep. Igor: And it's kind of celebrated as a good thing about what we have. Jim: But I bet Bill Gates doesn't wear the same watch that the dude in the Walmart [00:34:00] parking lot is wearing. Igor: Mm-hmm. Jim: So it's just interesting. No obvious reason. And if anything, the phone is more flashy, right? You're pulling it out all the time. Igor: Yeah. Yeah. Jim: Right. If I was going to think a priori about what would be a wonderful status
symbol, the phone would be it. Mm-hmm. And yet for whatever reason it didn't happen. Liv: Well, I think it's partly because
the status games have switched into how many numbers you have on social media. They've become fully digital. Jim: We should keep in mind that, while we are all very online, only about 1%
of people give a shit about the number of likes and followers and all that sort of crap. Liv: Oh, I completely disagree. Jim: You think? Liv: Completely. 1%? No way. Jim: Maybe it's 1% of boomers. Liv: Yeah, maybe. Of your generation, yes. Jim: Because when I talk to my friends, they go, what's Twitter? You know? Liv: No, the generational gap- Igor: Mr. Beast has like 3% of the world following him. Just one guy. Liv: For Gen Z, it's almost entirely about that. Like, it's the number one job. Jim: There's a job. It's number two, after medical doctor. Liv: Really? Jim: Medical doctor is [00:35:00] number one, and influencer is number two, I believe. Liv: Ah, I would bet against that. All the studies I've seen- Jim: I'm not gonna bet against you... do I look like a fool? At least Western Gen Z- Igor: Your generation as a child wanted to be, number one, an astronaut. Jim: Yep. Igor: And nowadays it's unfortunately very much- Jim: It was astronaut number one, fireman number two, and policeman number three. Igor: It's also funny because it's actually the time when you can become an astronaut. It's actually feasible. We're getting there. Liv: Okay. So- Jim: Oh wait, if I'm talking to Liv here, I gotta have a wager, right?
So I'll wager you a dinner that the most recent polling will
show that it is medical doctor, or at least that it is not influencer. Liv: Which demographic, and which location? Igor: We're only looking at Gen Z, or Gen Alpha. Jim: Let's say American 18-year-olds. Liv: American 18-year-olds. Jim: 16-to-18-year-olds. Liv: Number one job. Igor: Okay, sure. And we'll take the average of three polls that we see as the top result. Jim: That'll be fine. Liv: What are we wagering? Jim: [00:36:00] A dinner. Liv: Dinner. Fine. Deal. Jim: Gambling with Liv, I ain't going higher than that. Liv: No, I mean, I'm like 60%. Igor: That's good, it makes it a fair bet. Jim: I'm probably about 50-50, 50-40. It's reasonable; I don't have any big insider knowledge. Igor: Liv and I, instead of wagering money, usually wager units of personal-assistant time against each other. So then you have to do labor for the other, which works- Liv: Mostly on backgammon. Igor: Which works really well. Liv: He owes me about five hours. Igor: I've put in quite a bit of work for you, actually. Jim: So, ripping your asses. Igor: Unfortunately, I've put in the work willingly
rather than tracking it every time and deducting it- Jim: Well, that's 'cause you're not a Game A motherfucker. And this is
important. Families have always been Game B, right? Yeah. Liv: Well, okay, so before we go any further, can you give your definition of Game A versus Game B? Jim: Yeah. Game A is the status quo. It's the game you're playing when you walk outside your house. [00:37:00] It's what I call late-stage modernism: it includes market economies,
particularly highly financialized market economies, typically democracy, certainly a strong technological base, all driven by this inner loop of money-on-money return that
we talked about, with some adverse effects, such as multipolar traps, which we sometimes call Moloch; being driven by money, including at the personal level, which we call Mammon; and
the belief in things that were manufactured for us to believe, which we call hypnosis. And Game B is a
proposed, very rough sketch of an alternative social operating system for the human race. A very, very modest proposal, right? It would literally rethink how we relate
to each other at every scale and apply some of the same ways we deal
with each other in the family and in our [00:38:00] friendships, the way humans lived for 290,000 years as foragers, but updated, modernized to take advantage of the most cutting-edge technology, and actually drive technology further, so that we're not caught in win-lose situations.
We're not caught in multipolar traps, where we'd all like to not do X, but we're forced
to do X, often by one bad actor, sometimes by no bad actors, just by random fluctuations. You know, the stupid story I give, and I don't even know if the underlying premise is true,
but let's imagine it's 1975. There are five soft drink companies, right? And they're all using cane sugar to sweeten their soft drinks. But one of the guys decides, aha,
there's money to be made by replacing the cane sugar with corn syrup. And so he does. And oh, guess what? The consumers can't tell the difference. Ha ha ha, evil genius,
right? And the other guys go, well, that's kind of a shitty thing to do; corn syrup's worse for you. I don't know if it is or isn't,
but people think it is. And guess what? Because the first guy's profit margin goes up, his stock price goes up, and his market share starts to [00:39:00] go up, 'cause he can undercut the other guys on price, everybody else has to get rid of cane sugar and replace it with corn syrup, even though they didn't want to.
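The corn-syrup story has the classic multipolar-trap structure: a multi-player prisoner's dilemma. A minimal sketch, with invented payoff numbers, shows why each firm defects even though every firm prefers the world where nobody does:

```python
# Toy payoffs for the soda story (all numbers invented for illustration).
# "defect" = switch to cheap corn syrup; "hold" = keep cane sugar.

def profit(my_move, others_defect):
    """One firm's profit given its own move and whether rivals defected."""
    if my_move == "defect":
        # Cost savings, diluted once everyone else also defects.
        return 12 if not others_defect else 8
    else:
        # Holding is fine until defectors undercut you on price.
        return 10 if not others_defect else 4

# Defecting dominates: it pays more no matter what the other firms do...
assert profit("defect", others_defect=False) > profit("hold", others_defect=False)
assert profit("defect", others_defect=True) > profit("hold", others_defect=True)

# ...yet everyone-defecting leaves each firm worse off than everyone-holding,
# which is the trap the "accord" idea described next is meant to escape.
assert profit("defect", others_defect=True) < profit("hold", others_defect=False)
```

The exact numbers don't matter; any payoffs with this ordering produce the same race to the bottom.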
In the Game B world, the soda people would have what we call an accord, in which we all agree quality soda is made with cane sugar. And yes, innovation can occur, but we're not gonna adopt innovations until we all agree that they're good and omni-considerate: not just for us, not just for the industry,
but for our customers and for society at large. Now that's just a very small,
homey example, but apply that from the bottom up to the whole world, and that is Game B. Another aspect is differentiation. As we talked about earlier,
Game A is completely dominated by this inner loop of money-on-money return. And let me say,
we're not saying we should get rid of the market, but the big problem with it is that it's a signal on only a single dimension, and most of the world's interesting problems are high-dimensional. A nice simple example, should I
give [00:40:00] one? One I like a lot, 'cause it's highly tangible.
Imagine bluefish, a kind of fish that is common where I live, in the ocean and the bays
and such. Some people like it, some people don't, which is important, actually. And there's a biologically sustainable number of bluefish that can be harvested each year while
maintaining a healthy population. Today that has to be enforced, if it's enforced at all, by complicated and onerous government regulations and all this sort of stuff, and people cheat and
that kind of thing. In the Game B world, we would add a second dimension called the bluefish coin. The people who live in the area where bluefish are harvested would hire a biologist or two and ascertain the sustainable limit. Let's say it's
40 million pounds of bluefish per year. Probably using a blockchain, we'd mint 40
million bluefish coins and distribute 'em per capita to the inhabitants of that bioregion, right? It turns out a lot of them don't like bluefish; it's a strong, oily fish. I happen to like it; a lot of people don't. So there would be automated trading agents. You'd
just put a check mark on your inventory, [00:41:00] say, no, I don't want any stinking bluefish, and liquidate 'em, for money or at least some other intermediate good. The automated trading agent would basically sell the things you don't want and buy the things you do want, and we'd then have closure on the system, so that only 40
million pounds of bluefish could be transacted. So when I buy a pound of bluefish, not only do I have to bring the dollar price, I also have to bring one pound's worth of bluefish coins. Now increase that to a thousand different things: various woods, oil,
CO2, and you have many dimensions controlling the transactions,
not just one. Then you can shape a world that doesn't just run crazily toward destroying itself. Liv: This is a way of managing common public goods, essentially? Or- Jim: That's one piece of it. Igor: Well, it also sounds like the thing it relies on very heavily,
in this case, is the assumption of the expert biologists you brought
in [00:42:00] about how many bluefish can be sustainably harvested, right? And the same would exist for how much oil we can extract per year, et cetera. Those are definitely interesting questions, right? And there
is a question of how to imbue this information into the transactions- Jim: Without regulation, you know. Igor: But there is an issue, for example, with something like energy:
it strongly depends on what you can do with it on the other side, right? It's not a zero-one issue, where we can easily extract
this many barrels of oil, and if we do even one barrel more, we're on a bad path. It's a gradual trade-off, rather. And there needs
to be something on the other side that argues a little more oil will actually do us mighty good, because it ensures national security, for whatever reason, or something. How does that interact with it? Jim: Yeah, that's a good question. I call that the top-level governance issue. Mm-hmm. It's not entirely
clear yet. It's important [00:43:00] to note that Game B is a work in progress, and we probably have better thinking about things up to the bioregional level
than above it, so that's a very good question. But we do believe it has to be addressed at the global level,
right? Whatever the energy policy is, or more specifically, using the atmosphere as a dump. We don't care about energy, right? Use all the energy you want,
right? Mm-hmm. So long as it isn't heating the climate or polluting the atmosphere. Fusion power? More power to you. Fission power? More power to you, though there will presumably be some form of market-driven cost for disposing of
radioactive waste from nuclear power plants. That would be explicit, and most likely another token. Mm-hmm. So that's what I'd call the transactional monetary
signaling part. But there's another, even more important part of Game B, and it turns out it applies to Game A as well: any civilization has two things going on. Lots more than two things, but two key things. One [00:44:00] is, what are the capacities of the people, the citizens,
and the other is, what are the institutions at any given time? And then, and this is more important:
what are the trajectories of the capacities of the humans and the nature of the institutions? Our analysis says that for a while, maybe since Bernays, but certainly since 1975,
the institutions in Game A have actually been degrading the quality of the humans. Right.
Think about our politics. I point this out regularly. In 1964, the first time I paid
any attention to politics, less than 50% of Americans over 25 were high school graduates. 9% had four-year college degrees. And yet our politics was pretty sane. It got things done,
put men on the moon. We did some stupid shit like the Vietnam War, but we recovered from that pretty quickly. Built the interstate highway system.
Our governance actually worked pretty well with people without much education. Today, [00:45:00] 92% have high school degrees, 36% have four-year college degrees, and we have
an insane shit show for politics, right? So we have actually degraded the capacity of our humans
to do collective sense-making, because our institutions have also devolved. And further,
it's a spiral. As human capacity goes down, the quality of the institutions you can
build goes down, and those institutions will probably make the people worse. So you get a downward spiral, and we'd say late-stage Game
A, at least since '75 and probably since '25, has been on a downward spiral of institutions and
human capacity. So the core design value for designing
the Game B pieces is that we have to reverse that spiral. We have to increase human capacity. And this is something I've learned by dealing with the hippy-dippy community. There's a fair number of people in and around [00:46:00] Game B who are hippie-dippy meditators and
all that shit, right? I don't do that stuff; I'm a good old redneck, basically, and a retired Game A motherfucker. But anyway, a lot of 'em do. And I see them go out for a 42-day retreat or something,
right? And they come back and they're all very serene and nice and all's well, and then six months later they're doing all the bad shit again.
I actually watched that and thought about it a fair bit, and it makes perfect sense, 'cause you come back into institutions that are not scaffolding
the change; that's the word we now use. If we had a world where your institutions scaffolded
the good changes you made to yourself, then they'd be more likely to stick. And oh, by the way, if you have a community of people who've all undergone some
sensible, well-thought-out, not-too-woo-woo transformation and have more human capacity,
essentially we can now build new institutions for ourselves that will scaffold us and even elevate
us to the next level, or provide a reasonable path to the next level [00:47:00] that's worth taking, 'cause it'll be scaffolded. So this is utterly core to the idea of Game B. So now you say,
wait a minute, isn't that a crazy dream? How the hell are you gonna have that happen on a worldwide scale tomorrow afternoon? The answer is you can't. And so Game B has also realized,
and I will say this was true from the beginning, that it has to start tiny and local. The language we now use is membranes. We build membranes; membrane is a biological
term, and the cell wall is a classic example. The key thing about a membrane is that it's semi-permeable. Some things come through,
some things go out, some things are ejected. The cell ejects the toxins, right? The cell could eject the sociopaths, for instance. It makes some things internally; it brings things in from the outside. And so we think of
the actual game of Game B, once it starts being played, as initially played in membranes that are curated by people who wanna live a certain way, who scaffold themselves,
who choose the [00:48:00] kind of personal change they want to make with each other, non-coercively, and build the institutions and the personal change up and up and up. And then the membranes communicate with each other via protocols. Some membranes are contained within other membranes; in fact, eventually all membranes are contained within
the master membrane. And this is a way that Game B could actually be played as relatively
strong sauce, as our friend Jordan Hall says, and yet be embedded in the belly of the beast. Igor: I think before we justify the specific structure of Game B much more, let's
bring it back a bit to Game A. Because you were the chairman of the Santa Fe Institute,
you know your complex systems science,
correct? And I think it's the complex adaptive systems lens that is used quite a lot in these evaluations of the different systems, right? And I think it's a great lens that's worth discussing. Though one comment: I'm not a fan of the Game A and B moniker,
in part because it [00:49:00] invokes the idea that A was the first
game being played. It's actually not, right? Beforehand there was proto-A, and proto-proto-A, et cetera. And Game A was
already an improvement on proto-A, in part. So we are actually in a chain of continually improving, of civilization finding new games- Jim: And forking all kinds of things. Igor: Yeah, and forking. We had a different game, and then the French Revolution came, they wanted a different game, et cetera. And you're kind of suggesting let's have another, because maybe this current version
is coming to a point where it needs changing, or else it will potentially self-destruct. So I want
you to talk to the point of: why is it that we can't just
find a few points of change within Game A such that it does not self-destruct? Jim: There are some people in [00:50:00] and around Game B, or who have previously been part
of the Game A world, who do think that the top-down method of a few changes will save the day. Igor: Mm-hmm. Jim: I don't believe it. I believe that Game A is a highly evolved beast that
runs on the money-on-money loop and has now been ramified via advanced finance. As an example, the debt crisis may be one of the ways that crashes the system.
There are just so many ways that the system is wrongly constructed and self-reinforcing.
Consider climate, right? We all know that's one of the boundary conditions. Where exactly it sits is arguable;
some of the climate activists are over-claiming the levels of risk. But it's definitely real. As a science person, I can show you the physical chemistry: you add a certain amount of CO2 to the atmosphere,
and the retained heat will continue to go up. Now, how much damage that will do, that's also arguable. But it is certainly a dangerous thing to [00:51:00] do. And essentially we just elected somebody who believes that climate change is a hoax, right? Igor: He was a little bit amenable, actually, in one conversation, on hearing that if you increase the CO2 in a room to a certain level- Jim: Plants will grow better. Igor: Well, no- Liv: No, cognitive abilities go down. Igor: Cognitive abilities go down, you get a headache. It's unpleasant. That he didn't like. Jim: Okay, that he didn't like. So here we are, collectively sense-making in the most powerful country in the history of the world, which has the best universities in the
world, which invented the internet, which has written most of the software of the world, and our collective sense-making elected someone who declares climate change a hoax. I mean, how are you gonna change that system? Igor: Well, one part, and we can touch on this later, around liquid versus representative
democracy, is that it's a problem of representative democracy, right? You just buy one basket of policy positions versus another basket of policy positions. Yeah. And other aspects of the Trump basket were just really favorable to people Liv: [00:52:00] compared to the alternatives. Igor: Compared to the alternatives, for some people. Jim: For some people. Igor: Well, for the majority of people, actually, in this case. Jim: Not quite, just less than a majority, 49.9%, something like that. Liv: But of people who turned out to vote. Igor: Well, in any case. But that would actually be a great example
where, okay, here would be the counter from a "let's keep Game A around" position: can't we
add something like a carbon tax? Can't we just find out more about how much CO2 we can emit where we are, innovate more,
allow more fission, innovate on fusion? Jim: All of those things are worth doing, and I advocate for them within the context of Game A,
you know? Mm-hmm. I take off my Game B hat fairly often and go play Game A politics, right? In 2016, I actually worked for the Bernie campaign, for instance. And I believed in
the Bernie campaign. It wasn't perfect, but it would have been a substantial step of amelioration
of Game A. [00:53:00] Unfortunately, by 2020, Bernie had gone nuts, so I did not work for the Bernie campaign in 2020. But let's talk about both hats, the Game A hat first. There is a proposal that's been floating around since
2015 around climate, around a carbon tax. It was signed by 300 academic economists, right
and left, saying this is the right way to do it. It's the following: a carbon tax
that starts at a moderate level and is stipulated to increase at a uniform rate for 20 years, then stay at that rate until
we stabilize carbon, and then it'll go down. And this is the key part: the carbon tax will be a hundred percent refunded per capita to the citizens. Igor: Right. Jim: So it's not giving yet more money to the government to do God knows what. Igor: So it's not even to develop direct air capture or anything like that; it's just straight up. Jim: No, though I would add to it, and
I [00:54:00] think they actually did have in the fine print of their system, that if you can do capture, you'll be paid, right? Igor: Yeah. Okay. Jim: You'll be paid the inverse of the tax. Yeah. So it sets very interesting, very simple, and very playable rules for a game.
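The per-capita refund logic Jim describes can be sketched with a few lines of arithmetic. All numbers here are invented for illustration, not taken from any actual proposal; the point is that because emissions are highly skewed, below-average emitters get back more than they paid:

```python
# Hypothetical fee-and-dividend arithmetic; every number is made up.
TAX_PER_TONNE = 50.0  # dollars per tonne of CO2

# A skewed toy population: most people emit little, a few emit a lot.
emissions = [5, 6, 7, 8, 9, 10, 12, 15, 40, 80]  # tonnes CO2 per person/year

taxes = [e * TAX_PER_TONNE for e in emissions]  # what each person pays
dividend = sum(taxes) / len(emissions)          # 100% of revenue refunded per capita

winners = sum(1 for t in taxes if dividend > t)  # people who come out ahead

print(f"dividend per person: ${dividend:.2f}")                 # $960.00
print(f"{winners} of {len(emissions)} people net-benefit")     # 8 of 10
```

With this toy distribution, the two heavy emitters fund most of the pot, so 8 of 10 people receive more than they paid, matching the "probably 70% of people get a net economic benefit" claim in structure if not in exact numbers.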
It's so obvious that this is within 10% of the right answer. I've thought long and hard about it and
couldn't come up with a better one within the Game A context. And yet not a single political entity in the world has even considered this. Liv: So why is that? Jim: Because the power of the business sectors that do not want to pay the tax is so
strong that it corrupts the politics of every country on earth, and not just the US, where it's
relatively easy to corrupt politicians. Europe's a little harder, but not that hard. So even Europe, with their cap and trade, it's all bogus; it hasn't worked at all, and anybody could have predicted it. This extraordinarily simple scheme uses the power of the market to send signals that are stable and predictable over a long period of time,
which would, [00:55:00] I'm quite confident, turn the curve on the
climate change problem in time, and would come close to the optimal minimization of total cost. And it doesn't even get a hearing in any major country on earth. Mm-hmm. Something as simple as that, as easy as that to explain, which, oh by the way,
will give you a dividend check every month. And for most people, the dividend check will be bigger than the tax that was imputed in their purchases. Why? Because consumption of energy is highly asymmetric, right? Rich fucks consume many times the energy of the median person,
which means probably 70% of people will get a net economic benefit from this thing. Igor: And it even improves the Gini coefficient, for example. Jim: Exactly. Igor: And so that's why you were going on about money-on-money returns,
because in this case, the companies actually have to. I mean, as Daniel Schmachtenberger put it,
the CEO and the board act as obligate sociopaths, right? By law, they have to- Jim: In the United States. Igor: [00:56:00] Fiduciary duty: optimize for relatively short-term profit. You can have a public benefit corporation, but very few are. Jim: And you can actually modify your bylaws. I pointed this out when the B Corp came around: you don't actually have to do that, you could just modify your bylaws, but nobody does, right? Igor: Yeah, nobody does. And the investors will go to the one which doesn't,
and this is the multipolar trap. Liv: Yes. Jim: We haven't talked about it as much as we should, actually, but the
capital investment structure of Moloch is also hugely powerful, right? 'Cause everything is financed by pension funds and big mutual funds, et cetera, and they are second-order obligate sociopaths,
right? Mm-hmm. And it's way easier to measure their returns, 'cause you can look at their financial returns to the hundredth of a percent, right? Mm-hmm. And people will move their money. Big pension funds, particularly, will move from one money manager to another for relatively small amounts. So that game is even more intensely
Moloch than the corporate world, 'cause it's utterly fungible. Igor: So then the question on that would be:
wouldn't [00:57:00] it still be a Game A game to say, hey, we now require... I suppose that's what some of these campaigns try to do. What was it, the big reporting thing where you had to do sustainability? Jim: Oh, ESG. Igor: ESG is obviously bullshit. It's just a bunch of forms that you have to fill out, and the better you fill out forms, the more you win. Jim: It enriches consultants. Igor: ESG was a terrible game, obviously, but what would the very good version of it be? Or maybe, I suppose, we're just coming back to Game B. But what I'm saying is, what if I added
another system of care, basically, outside of just a direct money incentive? There is also a
long-term-viability incentive or something. Jim: Yeah. How do you structure that? You'll need additional signals. Hence the bluefish coin; hence the oak coin. Liv: That's the problem. Anything where you are trying to compress all this information down into a singular metric, which is what ESG did, it's insufficient. So you need- Jim: High dimensionality. Liv: Yes, exactly. Which is what you are basically advocating for. Everything [00:58:00] has a coin, which is actually where crypto is so useful. Every common resource, every common good,
et cetera, that is hard to exclude people from but is scarce, you need some kind of tokenization to measure
it and allocate it, while still maintaining market dynamics by allowing for trading. Jim: Yeah. And markets are very important. One of the beauties of markets
is they’re free, right? I don’t like bluefish. I don’t have to eat bluefish. Right? If you lived in the Soviet Union, everybody got their chunk of bluefish, eat it
or starve, right? In game B, we would never do that. We would just say that the people who
are responsible for managing this biosystem, which is the people who live along the littoral, have decided this is what this area can tolerate in terms of use of bluefish. And it will diffuse out to the whole world by using markets, right? Markets are one of humanity’s greatest inventions that we certainly
don’t wanna throw ‘em away. Lemme also get back to, as you were going down the list, oh yeah, make Rutt dictator for a week, maybe we could [00:59:00] fix game A, right? Igor: Your words, not mine, but I’m down. Jim: Oh, yeah, yeah, yeah, yeah, yeah. That’s good. Great idea. But let’s go to
the one that actually runs the clock all the way back to 1994, when I first awoke,
right? My first proposal was to eliminate all advertising from all electronic media,
TV, radio. This was the very early days of the internet, et cetera. So I think I actually argued for linear media, like TV and radio, to limit advertising
to two minutes an hour, and to ban it entirely from random access media like
the internet. Because that gets back to this Bernays problem, the epoch of Bernays where we can manufacture the status games that people feel obligated to play. In fact, let’s connect two things here. This is always fun when we can do this. Slot machines and advertising, right? Folks at a high enough level of hierarchical complexity can choose not to play the advertising-driven game, right? I can easily afford to buy
all [01:00:00] the Land Rovers I want and I drive a beat-up 8-year-old Jeep just ‘cause, right? ‘Cause I don’t give a fuck. In the same way that none of us are gonna become
victims of slot machine addiction, I think at a very high level of probability. So the same argument for why slot machines are probably too dangerous for most people
says that advertising is too dangerous for most people, ‘cause it forms money-on-money-return-
only driven status games that inevitably lead to increased consumption and pressure on the world,
in addition to sucking brain power out of us that could be used for something else. Instead
of working the extra hours to make the payments on your Land Rover. You know, maybe you do a painting
or maybe you spend time with your grandkids, et cetera. It’s sort of an evil attractor, and
the only reason it exists is ‘cause it can be engineered to be profitable in monetary terms. So again, until you get rid of advertising, you’re still in game A. Igor: Mm-hmm. Right. But yet we still allow slot [01:01:00] machines. We’re not,
we don’t want to ban them. Right. Or would you? I imagine game B doesn’t suggest banning such a thing. It’s still very voluntary- Jim: That’s interesting, actually. Game B is curious ‘cause it doesn’t specify things like that, but it allows the banning and allows the non-banning through a concept in different membranes, which we call
coherent pluralism. Everyone agrees to six or seven top level accords. And then as you
go down the stack of embedded membranes, we also use the idea of subsidiarity. So you push decisions down as low in the stack as you possibly can. So let’s take the example of advertising. A membrane may ban advertising,
may ban advertising-supported media from coming through its walls. I dunno how it would do that, but let’s say it could. You know, another one could say, oh yeah, well, we don’t mind advertising. We’re gonna educate our people with ninja mind tricks to be able to ignore advertising,
which you may be able to do. And that’s one of the things that we, I think is central to game B is that it’s a high dimensional [01:02:00] search for
human wellbeing rather than a diktat for human wellbeing. So let some membranes allow slot machines and others ban them,
and then they’ll communicate with each other on a horizontal information bus. And over time, some consensus will arise on whether everybody should ban ‘em or nobody should ban ‘em,
or some people shouldn’t. Liv: Mm-hmm. So what would be the methods of,
because I mean, it sounds like it really rests upon these nested structures that have almost perfect information sharing with one another. Jim: Well, it don’t have to be perfect, just has to have some. Liv: Okay, sufficient, rather. Jim: More the better, more the better. High quality. Liv: Right. Jim: Good faith, non-corrupt. Liv: Have you done any practical experiments whereby these types of
structures have successfully, you know, have you tried living in a membrane, or do you know anyone that has? Jim: I do know some people that have, and most of them have failed, and I’m pretty sure we know why: they didn’t [01:03:00] realize they were living in a membrane. Think of your classic hippie communes, right? Mm-hmm. Or the eco-village today. Right. What they haven’t done is
stopped before they started and said, wait a minute, I’m in a membrane. It’s semi-permeable. Stuff has to go out, but also stuff has to come in. And so what’s the economics of my membrane? What engine is the equivalent of the mitochondria in my membrane so that my membrane works?
The word I use is membranes have to work, and holding hands and saying Kumbaya is not enough to make the membrane work. So what does your internal economy look like? I.e., what can you provide to each other? You know? Okay, we’ll supply babysitting to each other, and maybe homeschooling, and maybe we’ll grow some percentage of our food,
but we still need to buy a lot of stuff too. What could we do for game A that will allow us to earn enough game A tokens to buy the things we need from outside to bring through the membrane? And I can tell you that most of the folks that have set up these experiments in the past did not think in that [01:04:00] mercantilist way. And once we
lay this all out, I would hope that anyone who labels themselves game B, before they launch, will actually give it serious consideration the same way you would in a startup, right, to make sure that it works. Liv: Look into failure modes. Jim: Yeah. And specifically think through what is the fuel that
your thing needs from game A, and what are you gonna do for game A to earn that fuel? Liv: Right. ‘Cause that’s my next question: you’re technically trying to outcompete Jim: Yes. Liv: game A, while at the same time there is space for competition within game B. Is that correct? Jim: Right, though it has to be mediated
via accords so that you don’t fall into the Moloch traps. Liv: Right? And you have to be consciously aware of externalities at all times. Jim: Up and down the stack, horizontally with your peers that you’re in competition with. So you’re competing fairly up the stack so you’re not caught in traps by your
investors, and then below, the impact you have on people. So what Daniel Schmachtenberger would call omni-consideration. Mm-hmm. Which is [01:05:00] a core game B term. Liv: Right? You want the impact on all involved stakeholders to be essentially measured and tracked. Jim: This is where I have grown, he says, not just my waistline. Which is, if you had asked
me in 1994, yeah, we can measure it all. No, you can’t. At the end of the day, we have to rely
on our human intuitions and our collective sense making abilities, which are not formally logical. And we have to, in good faith, communicate with each other. However, we choose to communicate through a governance mechanism of some sort at every scale,
and use our human ability to feel our way towards solutions that are gonna be imperfect and partial
but are not corrupted by the things we know we don’t want to be corrupted by. And the other part about it, which again is where game B differs a lot from many other of these kinds of revolutionary ideas, is we don’t think
that the [01:06:00] first set of answers are the right answers. You know, it’s a process. It’s an ongoing search for what works
and it’s a sorting out of what works and what doesn’t work and what sort of works. Igor: And what I like about that is that it sounds like you’re not presupposing that there is a certain set of rules that would ever apply continuously, right? Like
many of the past kind of philosophies, even if you take the three big ones of
utilitarianism, deontology and virtue ethics, like they all kind of try to have a certain
set of optimizations that they have and then a certain set of rules underneath. But actually if you take each of them and become an ardent follower of exactly that,
you always get, in each of them, to very absurd outcomes and cases. Jim: Great example. Liv: Never go fully utilitarian. Igor: That’s why when people make the claim that, oh, then the ends fully justify the means, you can do this, it’s like, yes, that’s why you never go full ends-justify-the-means. Jim: Or you’ll actually end up as an EA person or something, right? Igor: Well, even worse, you end up as SBF, actually, right? [01:07:00] But the
same applies to deontology. It’s like, if you go full deontologist, then you end up telling the ax murderer where Anne Frank is in your house, et cetera, right? Like, you just do these types of actions. Anyway, my point, the thing I liked, is that you emphasize that you kind of have to trust human intuition a
bit and you have to improve your intuition by just, probably learning a bit more, et cetera. Jim: Oh, and I think this is where- Igor: understanding that there is no fixed set of rules that can sufficiently describe the complexity of reality. Jim: And in fact, rules themselves are ephemeral. Even the top level rules will be evolved by some process not yet defined. And
I think another very important game B thing is that any set of accords is only
the current snapshot of a dynamic process that will be undergoing some trajectory. Mm-hmm. And that trajectory will be both endogenous and exogenous. Endogenous, meaning the people inside the membrane are learning. Right? We had the rule, no gum,
no bubble gum, ‘cause we didn’t want bubblegum on the streets. [01:08:00] And we go, well, I think we’re now mature enough, we’re smart enough to put the gum in the trash, right? So let’s change the rule. And so we should always be thinking about any set of accords
as a snapshot on a trajectory of the growth of this institutional personal spiral upward. And
that is also hugely important when we think about how this is gonna be done. Liv: So maybe one rule you do have is that there are no fixed rules. Jim: Okay, Mr. Gödel, that’s the only rule. Igor: So it sounds like, I mean, it obviously is a very decentralized sensemaking.
It also has a lot of network thinking. And there are two movements in this
direction currently, right? Like, there are a lot of people pro-decentralization. It’s the crypto people, it’s the e/acc actually as well. And then
the network state people, like Balaji, et cetera. How have all of them, I imagine you’ve had some conversations with some of these people? Jim: Yeah. Quite hilariously, game B is actually listed as a network state example. Igor: Yeah, that makes sense. Jim: And they have asked us to [01:09:00] be, you know, part of their meetings and stuff, and we’ve just said we’re not opposed to what you’re doing,
and what we’re doing is sort of compatible, but it’s not quite the same thing. Igor: Yeah. Jim: You know, they’re, you know, so, fiercely,
financially libertarian that they don’t understand, at least I don’t think they understand, the high dimensionality of the nature of the problem. Liv: Right. Jim: And theirs is a very simplistic solution. But some of their ideas are good,
but one of their ideas is fundamentally bad, which is that you can do all this on the network and,
and at least for a long time, not have control of the ground. There has to be a base level to get the highest fidelity form of living. Because while we all live in the networks to some degree, most people get most of what they get outta the world from their actual neighbors on
the ground, their community, their caregivers, taking care of their parents, taking care of their neighbors, et [01:10:00] cetera. And I think that
these very left-brain, to use the cliché, but let’s use it in the intelligent Iain McGilchrist fashion, very left-brain. You should have him on your show,
by the way. I know him pretty well. I can hook you up a little. Liv: Yes, please. Jim: The left-brain folks think they could live in a network world. Maybe they could
if they’re aspie enough, right? But for normies the ground is important, and
not every game B membrane will have ground initially. Probably most won’t. Liv: What do you mean by ground? Jim: Earth. Liv: Okay. Jim: So in other words, a community on the ground with a fence around it, that’s the membrane. Liv: I see, the actual physical in space. Jim: Physical location. Liv: Okay. Understood. Igor: Touching the grass. Jim: Yeah. Touching the grass, or touching the asphalt even, right? Mm-hmm. Because it could be in town. It could be, let’s say for instance, 10 people move to the same neighborhood in, you know, South Austin. And even though
their properties don’t touch, each of them is part of the same membrane, right, and declares this to be a game B membrane. When you walk into their house, you play by game B rules. And when you’re outside the house, you’re back to game A [01:11:00] motherfucker rules. Liv: Mm-hmm. Igor: So it sounds like, probably the thing I would imagine where they have a disagreement
with game B. Yeah. There is significant overlap, similar with the e/acc, but I think both of those, Jim: what is e/acc? Igor: So, oh, e/acc is effective accelerationism. Jim: Oh, those fuckers. Igor: Yeah. Yeah, exactly. No, no, no. But actually they have,
they have some right ideas, by the way. I think that on net they’re unfortunately making everything worse. Exactly. But I think that they
have touched on and emphasized certain points that. Others haven’t sufficiently emphasized. So for example, that safety is being used as a veneer for control, et cetera, Jim: which is true. Igor: It’s true. Right? Or that just beware of over centralization. It’s,
it’s also true, that’s also true. They’re much more in favor of decentralization than many other communities that have tried all of these government
backed solutions while the government is still as faulty as it is currently. And I genuinely appreciate their work. Jim: I agree with those things. Liv: They just don’t realize, or they insufficiently take into account, that by just blindly saying
just go faster, you are also accelerating the [01:12:00] centralization problem. Igor: Exactly. Jim: And the Chinese love that. Igor: Right? Liv: I know, they’re thrilled with what e/acc are doing. And they’re like, yep, great. Igor: If we, again, take the complex adaptive system lens,
the point that both of these communities make is kind of just, hey, capitalism is great,
libertarianism is great. Just trust it. It’s gonna be great. You can’t predict the future anyway. Especially the founder of e/acc likes to say that in quantum mechanics there is this Lyapunov exponent, which describes like your kind of
like fuzz or uncertainty in the system. And as the system progresses over time, Liv: it gets bigger. Igor: then, in a chaotic system,
it becomes bigger and bigger and you can predict less and less. Which is only part of the story, because the other part of the story is that after even more time,
some things become easier to predict. Actually, I don’t know whether the stock market is gonna go up or down tomorrow,
but I can bet with 90% certainty that in 10 years it’s gonna be higher than it is today. Jim: That’s, I call this the ensemble
of trajectories, [01:13:00] right? This is a complexity science idea. Igor: Yeah. It’s, there are attractor states. Jim: Yeah. You can say that this, there’s an ensemble of trajectories that we will
probably be in, and the stock market’s a perfect example: on average, 10 years from now, the trajectory we actually walk, which I cannot predict, mm-hmm, will be one that ends up at, you know, a positive return of above 7% or whatever the hell it is.
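Jim’s ensemble-of-trajectories point can be sketched in a few lines of Python. This is an illustrative simulation, not anything from the conversation beyond the ~7% average-return figure; the 15% annual volatility and the normal-return model are assumptions made for the sketch.

```python
import random

random.seed(42)

# Sketch of the "ensemble of trajectories" idea: any single path of a
# noisy growth process (or tomorrow's move) is unpredictable, but the
# ensemble of 10-year outcomes drifts upward. The 7% mean annual return
# echoes the figure in the conversation; the 15% volatility and the
# normal-return model are assumptions for illustration only.

def simulate_path(years=10, mu=0.07, sigma=0.15):
    """Final index level after `years` of noisy annual returns."""
    level = 1.0
    for _ in range(years):
        level *= 1.0 + random.gauss(mu, sigma)
    return level

paths = [simulate_path() for _ in range(10_000)]
share_up_10yr = sum(p > 1.0 for p in paths) / len(paths)

# One trading day: same drift and volatility, scaled down to a day.
daily = [random.gauss(0.07 / 252, 0.15 / 252 ** 0.5) for _ in range(10_000)]
share_up_1day = sum(r > 0 for r in daily) / len(daily)

print(f"share of 1-day moves that are up:   {share_up_1day:.2f}")   # near a coin flip
print(f"share of 10-year paths above start: {share_up_10yr:.2f}")  # heavily positive
```

The individual path you actually walk stays unpredictable; only the shape of the ensemble, skewed upward by the drift, is forecastable.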
But you can’t predict what trajectory it’ll be. Right. And this is another game B concept we call
epistemic humility, right? Mm-hmm. That, you know, applying formal complexity theory and diffusion,
et cetera we can’t see that far ahead, but what we can do is get better tools to see further ahead. You know, for instance, agent based modeling is probably better than traditional differential equation based economics for trying to understand these
trajectories. But it’s not gonna be perfect for, you know, the very simple reason of
deterministic chaos, right? It was shown with the Lorenz equations that even a very simple
set of equations and initial conditions can produce [01:14:00] utterly, you know. Igor: Chaotic systems. Jim: Chaotic systems that, as you go out very far, you can’t predict. So it’s a combination of honoring epistemic humility, but at the same time trying to build
binoculars that’ll let us see another 25 feet ahead on the road. Igor: I’d love to hear you also talk about why you think that game A is
self terminating, but specifically from a complex adaptive system lens. Jim: Yeah, I would say a complex adaptive system lens, broadly construed to also
include broader system science. ‘Cause we often talk about trajectories and
predictability or lack thereof, epistemic humility, et cetera. But what people forget is that real-world systems also have boundary conditions, right? For instance, if you make the assumption that we can grow infinitely, then exponential growth as
driven by late-stage financialized capitalism could work, right? But the reality is we have a finite planet with finite carrying capacity. Lemme give a historical example. I,
I arbitrarily defined that [01:15:00] game A started on February 24th, 1694,
right? Just for fun, the day the Bank of England was founded. And it was one of the
three pieces of game A, which is financialized capitalism. We always have had capitalism;
financialized capitalism started with the Bank of Amsterdam, the Bank of England. Democracy, which started right about the same time in England with the Glorious Revolution of 1688. And then the scientific revolutions of the 17th century, you know, Newton and Boyle and those guys, and the equivalents on the continent. So,
roughly speaking, I say, other people say it started 5,000 years ago, but whatever, it doesn’t really matter, but let’s say game A started in 1700. The population of the earth in 1700 was only 650 million people, about a 12th of
what it is today. Each of those was consuming about a 10th per capita of the resources of
the world [01:16:00] in terms of wood and energy and metals and things of that sort. So
you multiply a 10th times a 12th, and when the human race as a whole was a 120th, less than 1% of
the impact on the planet that we as humanity are today, it was not worth worrying about. At that time, it seemed like nature was way bigger than humanity and just going
for it was a good thing. ‘Cause in 1700 most people lived in huts with dirt floors. 50%
of kids were dead by the time they were five. We didn’t know dick about medicine. You know,
that was pretty horrible unless you were among the rare, rare nobility. And so game A was a wonderful thing. Those three things that came together, democracy,
technology, and finance, caused an explosion that we’re still in today, but it’s an exponential
explosion. And in fact, the exponential component is cooked in by the financial part. The way that finance works, it’s easier to understand with respect to bonds and the money supply: you’re [01:17:00] stuck on an exponential growth curve, period, or you’ll die. It’s like a shark: if you stop swimming, you’ll sink and die. So
the three together are hooked by the money-on-money-return machine into perpetual exponential growth. And so think about a world that’s big, but we humans are small, in 1700, and we’ve been growing exponentially now for
some 325 years. We are close to the wall. In fact, some people think we’re past the wall:
the nine planetary boundaries, supposedly we’re over four or five of them and approaching the rest of them. You know, of course people choose those things, they choose ‘em sometimes for political reasons, right? But whatever it is, we’re getting closer, Liv: right, right. Jim: Closer to the wall. And if we were to,
it’s just very simple. You don’t even need any fancy network theory. When an exponential is reaching its boundary condition, it’s going to crash, period. Right. You know what happens in overpopulation, you know, the famous hares-and-lynxes thing where
the hares go crazy and then the lynxes go crazy and then they both [01:18:00] die off, right? Liv: Malthusian, yeah. Jim: It’s not exactly Malthusian, but Malthusian is one variant of it. So it’s just relatively obvious that when you start small and you
grow exponentially for 325 years, you eventually fill up the space. And so that alone does it. Of course, there’s many other self-terminating ways. You know,
for instance you know, the invention of advanced and low cost bio technologies,
right? You know, how, how far are we away from a misanthropic incel in their basement using home CRISPR to splice some Ebola into the common cold? Right? You know, that’s, that could be a failure mode. Liv: Okay, but. I think the reasonable pushback that most people have to this,
this statement of, okay, we’re approaching boundary conditions, is yes, but we’ve always innovated and expanded the boundary, so why is this different? Jim: And we can’t, because at this point, the one boundary that we’ve never been able to expand is Earth itself, right? And yes, we [01:19:00] could go to Mars, but that’s an idiot idea. We should go to the asteroids. Actually, Elon: asteroids first, not Mars. But the amount of humans and the amount of
our work that’s gonna be put into space anytime soon is absolutely de minimis. Now, over tens of
thousands of years, hundreds of thousands of years, it could be very significant, but not in time to save us from crushing ourselves as we hit the wall with respect to the carrying capacity of the earth. Now, Trump- Liv: but I mean, what specific carrying capacity? Like you’re saying, is it just like arable land, the ability to- Jim: Topsoil, arable land, water. Water we can actually solve by brute force using
desalination. That one’s solvable, turns out. Liv: Can’t we innovate our way through, like, reinvigorating the soils? Jim: We haven’t found a way to do that yet. Igor: Vertical regenerative farming. Jim: Yeah. Charles Eisenstein has a very interesting way of putting this, which is
even to the degree we can, do we want to? He says, oh yeah, we could probably have a population of
three times the one we have today, but it would be a world of concrete and shit, [01:20:00] right, where we would be living. Igor: I’d push back a little on that, because, who is it, Matthew Yglesias wrote this book, One Billion Americans. Jim: Yes, I read the book. Igor: Yeah. And he points out,
if the US went up to a billion, it would be the population density of France. Jim: Exactly. The US can do it. But we’re not talking about the US, we’re talking about the world. Igor: But we also have a lot of places with lower density than the US. Right, right. It,
it wouldn’t be, I mean, what things would still concentrate? You would still have
Shanghai, Hong Kong, Tokyo, Mexico City, et cetera. Maybe they’d not just become massive. Jim: Maybe not. Igor: Or you would have more of them. Jim: Game B would say probably they gradually evaporate ‘cause they have outlived their useful
purpose. Mm-hmm. Now, this is a good complexity science topic. The work by
Geoffrey West and Luís Bettencourt and some others, Santa Fe Institute, University of Chicago, I think, is where Luís is these days, has discovered that animals have scaling
laws. You know, they get bigger, but their metabolism slows down relatively, otherwise they would explode, basically. [01:21:00] But cities have the opposite. As they get bigger,
they scale positively. The per capita income goes up, basically, but crime goes up, patents go up, musical compositions go up, at approximately 15%
for each doubling. So if you take a city at a million and it grows to 2 million,
not only will it do twice as many patents, it’ll do 15% more than twice as many
patents. So it’s increasing returns to scale, but it applies to both the good and the bad. You know, mental health, disease, crime, et cetera. And this is where technology
actually is important. Jordan Hall has come up with this concept he calls the civium, which I think encapsulates this in an easily understood fashion, which is, by the good use of
networks and network protocols, we ought to be able to capture the positive returns of scale
by a global set of fractal networks that can work together at any scale. ‘cause by the way, game B does not want to give up on going to the [01:22:00] stars.
We don’t wanna give up on being able to build things like the gravitational wave observatory, which took 30 years, across the whole world basically, to build. But we don’t need to live in
cities to do it. You know, cities, as I say, have that scaling law on the negatives too. Every time I come into a big city, it gives me the heebie-jeebies, you know. Austin’s sort of barely tolerable. Anything bigger than Austin is like, Jesus, get me the hell outta here.
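The 15%-per-doubling claim maps onto the standard superlinear power law for cities, Y = Y0 (N/N0)^β. A quick sketch; the 15% figure is as quoted in the conversation (it implies β ≈ 1.20, close to the ≈1.15 the Bettencourt–West literature reports), and the patent numbers are made up for illustration.

```python
import math

# The "~15% more per capita with each doubling" claim, expressed as the
# standard superlinear power law Y = Y0 * (N / N0) ** beta. The 15%
# figure is as quoted in the conversation; it implies
# beta = log2(2 * 1.15) ≈ 1.20 (published estimates are around 1.15).
BONUS_PER_DOUBLING = 1.15
BETA = math.log2(2 * BONUS_PER_DOUBLING)

def scaled_output(base_output, base_pop, new_pop):
    """Total output of a city of new_pop, given base_output at base_pop."""
    return base_output * (new_pop / base_pop) ** BETA

# A hypothetical city of 1M producing 100 patents/year doubles to 2M:
patents = scaled_output(100, 1_000_000, 2_000_000)
print(f"beta ≈ {BETA:.3f}")
print(f"patents at 2M ≈ {patents:.0f}")  # 2x the people, ~2.3x the patents
```

Because the exponent is above 1, the same curve applies to the bads (crime, disease) as well as the goods, which is exactly Jim’s point.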
And as people become more self-aware, in our higher-end capacities, it’s our conjecture at least
that if there were a way to make a good living, people would prefer not to live in big
cities, especially if they could capture those innovation and economic scaling laws. So my hypothesis would be that big cities gradually evaporate over some period of time. And there are some people who just like big cities, and- Liv: I mean, I think it’s a function of age. I liked a big city when I was under the age of 25. I loved it. I was hungry for it.
I grew up in the country and I was terminally bored all the time. And I was just [01:23:00] looking for, because you are trying to expand your
network and learn about the world, and you just wanna maximize the number of interactions you have. And so big cities are naturally attractive. Jim: And it’s thought that that is actually what drives this increasing return to scale: interactions are cheaper in a big city. Liv: Right. Igor: Right. And that’s also in part why the big cities that have actually survived have survived longer than the big empires, right, in many cases. Jim: Much longer, much longer. Rome is, Damascus is still around, you know. It’s gone through, you know, 8,000 years of history. Many, many empires have come and went. Damascus is still there. Mm-hmm. Liv: Now, I mean, as populations age, I could
see that being another forcing function of why, like, big cities will cool off. Jim: I suspect that we’ll see, at least
one game B hypothesis. You’re absolutely right about the life cycle. Young people need to go out and explore, right? And so we would expect that people do that
early, and frankly, I hope they start doing it like age 16. This idea that you have to be
22 or 23 to be an adult is kind of nutty, right? You should actually get out on the road when you’re 15 or [01:24:00] 16 and travel from one community to another, and communities will be clustered, so there’ll be numerous of them close together. There may be units that are maybe the equivalent of a small city that are within a few miles of each other. And go out
and try various communities and see what you like and interact with various people. Track down the thought figures that you want to interact with, and their communities. So I do believe there’s still gonna be a need for young people to mix and to interact and to get exposed to new things. But do we need the big cities to do that? I don’t know. Time will tell. And again, game B doesn’t say we need to abolish the big cities. You know, we’re not Pol Pot, gonna march everybody off into the countryside. But it’s our hypothesis that if the way to get
increasing returns is no longer the city, the non-city way of getting increasing returns to scale will on net be more
attractive over time. So it’ll be a movement of people from the city to the non-city. Liv: What about, again coming back to planetary boundaries: [01:25:00] populations are already
declining in many nations, most western nations actually. Jim: Yep, yep. Liv: And the projections are always updating, but it seems like we’re gonna, as a global population, taper off around 11 billion. Jim: 10 or 11. Liv: 10 or 11, yeah. It was 11; it’s already probably been down-weighted to 10 or so
within the next five, ten years. And then things are gonna start declining. Jim: The peak will be at the end of the century, approximately. Igor: Yeah, it’s a little later when you reach 10 billion, but in any case you do, okay. Liv: Yeah, no, you’re right, my numbers were wrong. Igor: It’s irrelevant to the point. Yeah. Liv: But the point is, we are reaching a likely global maximum of population, and then things are gonna decline anyway. So
again, isn’t that going to take pressure off planetary systems? Jim: That takes pressure off, but where the real pressure is coming from,
and keep in mind, we won’t reach the peak till around 2100, right? Mm-hmm. So there’s a lot of momentum built into population systems. What’s actually crunching the world right
now, ‘cause population growth has slowed considerably, is the rising [01:26:00] GDP
per capita, right? If everybody on earth had an American-level standard of living, Liv: We would be in the shit. Jim: We would be deeply, deeply in the shit. And how can you morally say that those people shouldn’t have the same Liv: Right, of course. Jim: standard of living that we have and they all aspire to? And, you know, one
of the things I use as a very rough rule of thumb is the energetic intensity of a
civilization, calculated simply as the continuous watt power of a society. How many watts are you consuming 24 hours a day? It turns out for the US, Australia,
and Canada, the three big offenders among the big countries, it’s about 11,000 watts continuous. So you have 110 hundred-watt bulbs burning all the time, including everything you do,
including all the products you consume. Sub-Saharan Africa, less than 500. Nepal,
a little bit more, 600. India, about a thousand. China, about 4,000. And what
I find very interesting is Portugal, which is a pretty good [01:27:00] place to live, about 4,000.
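The continuous-watts rule of thumb checks out arithmetically. A small sketch using the per-capita figures as Jim quotes them (approximate numbers from the conversation, not independently sourced):

```python
# Continuous-watts rule of thumb, with the per-capita figures as quoted
# in the conversation (approximate). 11,000 W continuous really is 110
# always-on 100 W bulbs; converting to annual energy:
# kWh/year = watts * hours_per_year / 1000.
HOURS_PER_YEAR = 24 * 365  # 8,760

watts = {
    "US/Australia/Canada": 11_000,
    "China": 4_000,
    "Portugal": 4_000,
    "India": 1_000,
    "Nepal": 600,
    "Sub-Saharan Africa": 500,
}

for region, w in watts.items():
    bulbs = w / 100                        # equivalent always-on 100 W bulbs
    kwh_year = w * HOURS_PER_YEAR / 1000   # annual energy per person
    print(f"{region:20s} {w:6,d} W = {bulbs:5.0f} bulbs ≈ {kwh_year:8,.0f} kWh/yr")
```

The Portugal-at-4,000 line is what motivates the convergence target: China and Portugal already sit at the same continuous wattage despite very different lifestyles.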
So it is probably possible that even before we can get rid of fossil fuels entirely
we could steer our societies to something like a 4,000-watt society, and we could bring the people that are below 4,000 watts up, and we could allow the people that are above 4,000 watts
to slowly over two or three generations settle back down to around 4,000 watts. And we can all have approximately the same material existence. Now they’ll
be qualitatively very different. People will choose to spend their energy differently, and probably we can get to zero carbon by 2070 or so and
provide 4,000 watts continuous to everybody on earth, and do it just barely in time, and in a way that has a very substantial amount of justice built into it. Liv: But again, these are assumptions based on sort of current energy production paradigms,
including presumably, you know, the [01:28:00] continuous decoupling between carbon and energy usage, which is progressively happening. Jim: And we’re assuming so-called learning curves on all the technologies as well. Liv: Oh, I see. So that’s baked in, like innovation and- Jim: that’s a key part. A key part of the Santa Fe Institute work on alternative energy is that the team there that did that work has been following things like heat engines:
you know, the steam engine, gas engines, gas turbines, jet engines, rockets. And they have a predictable learning curve to them over 250 years, which is amazing.
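The learning curves Jim is referring to are often formalized as Wright’s law: each doubling of cumulative production cuts unit cost by a roughly constant percentage. A toy sketch; the $100 starting cost and 20% learning rate are made-up illustrative numbers, not the Santa Fe team’s fitted values:

```python
import math

def wright_cost(initial_cost: float, cumulative_units: float,
                learning_rate: float) -> float:
    """Unit cost after `cumulative_units` produced, per Wright's law:
    cost falls by `learning_rate` with every doubling of cumulative output."""
    b = -math.log2(1 - learning_rate)  # experience exponent
    return initial_cost * cumulative_units ** (-b)

# Illustrative only: $100/unit at unit one, 20% cost drop per doubling.
for units in (1, 2, 4, 1024):
    print(f"after {units:>4} units: ${wright_cost(100, units, 0.20):.2f}")
```

After ten doublings (1,024 units) the unit cost has fallen to about a tenth of where it started, which is the kind of compounding the alternative-energy forecasts lean on.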
Now, it’s a series of S-curves, where one starts and then the next one starts. Same’s been true for electrical generation, same’s been true for petroleum
refining. And they’ve created these curves for the alternative energy things as well. And so I come up with the back-of-the-envelope 4,000 number assuming a
reasonable set of innovations and learning curves. And I also always acknowledge this:
no fundamental breakthrough, right? Like for instance, Liv: AI, like something created by- or Jim: fusion, [01:29:00] desktop cold fusion or something. Right. Zero-point energy. Liv: Right? Right. Jim: But probably fusion could have some impact at the margin within our planning
horizon. But there could be a technological breakthrough that could make that happen earlier,
but probably isn’t gonna be significant enough to save the day by sort of 2060,
2070, where things start to get pretty steep. Igor: Can scaling solar or fission get us there as well? Like if we removed regulation and just- Jim: and we should do all those things. Yeah. And they will help. But just talk to Nate Hagens, Igor: right? I know he has a- Jim: he and his folks have done the work, and they come up with 2,000-watt societies as the best we can do. Igor: It’s just, when most people hear 4,000-watt society or 2,000-watt society, it sounds very unexciting to be in, kind of like a level wattage use among all people. Liv: It sounds dystopian to most people. It sounds degrowthy and it’s [01:30:00] odd. Jim: This is hugely important. Igor: I know that it’s not, but it sounds very equity and degrowth coded. Jim: Now look, I hate the word degrowth, right? ‘cause it confuses two different things. Igor: Mm-hmm. Jim: Right? Look at an iPhone: ain’t much in it except sand and a little bit of
metal. And yet there’s a huge amount of captured intellectual property in it. So that is a good kind of growth from the perspective of the planet.
You’re not doing much harm to the planet when you buy an iPhone. You’re doing a little bit, but not like if you went and spent a thousand dollars on
aluminum. Or, you know, you can buy a lot of aluminum for a thousand dollars. Liv: coal, Jim: two-by-fours, coal, you know, corn, right? Corn’s renewable. So, copper bars,
right? You’re doing way more harm to the environment than with a thousand dollars for an iPhone. And so we have to distinguish between enriching things
versus just more bulk stuff. A simple example I give is clothes, you know, this current fast-fashion fad [01:31:00] of, you know, we’ll wear clothes for six weeks,
throw ‘em out, order some more on Temu. Definitely not good for the environment. How about instead we bought clothes that were long lasting and well made? And let’s say, in a particular Game B membrane, one of the crafts that is practiced
there, and is a compensated role, is we embroider each other’s clothes to tell our life stories,
right? And so your clothes get to be embroidered by the embroidery people. And each week, each month, a little bit more of your life story is embossed in
your clothing. But your clothing is designed to last for 20 years, right? Or 10 years. And so instead of cycling through new clothes every six weeks,
you basically cycle through your clothes every five or 10 years. You don’t have as many, but they’re much, much better made, and they tell your own story in an
artful form as an example of clothing growth in the microsphere, which is this beautiful
embroidery and the telling of your life story. But not growing [01:32:00] in the
macrosphere of throwing shit away constantly and just churning for churning’s sake. Igor: right, so you’re substituting materialism for meaning, quality, Jim: quality and meaning, and intricacy. And actually Richard Feynman said this a
long time ago. He was such a brilliant guy. He foresaw nanotechnology, even though he didn’t know the mechanisms; he said there’s plenty of room at the bottom. There’s a lot of ways to do chips. Computer chips, of course, are a perfect example of that. You know,
50 billion transistor gates. And that’s basically a piece of sand about this big,
and yet a huge stack of value chain goes into creating that thing. And that’s the kind of good growth as opposed to gross growth. Liv: So it’s growth into the microscopic, essentially, into smaller scales. Jim: And in reality, Igor: you’re getting more efficiency out of a small volume Liv: of a smaller amount of material, basically. Jim: essentially. It’s freezing information and knowledge, right? Liv: Mm-hmm. Right Jim: because the chip has to be designed. But [01:33:00] the beauty of information
Stewart Brand made the famous statement, information wants to be free.
Or was it Kevin Kelly? It was one of the two of ‘em; it doesn’t really matter, they were both closely related to each other in their business careers. The reason that information is so cool, and things like computer chips are so cool, is it
costs hundreds of millions or billions to design a state-of-the-art chip, but way less to stamp ‘em
out, right? And so the marginal cost is way lower. And so if we start to think about a non-rivalrous
economics, where people can have things at more or less their marginal cost,
plus enough in addition to incentivize people to invent more rather than playing rent-seeking games,
then everybody could be way richer in captured intricacy, shall we say, or art, et cetera. So there’s no reason to be dystopian. I would also add the following: more and more and more is just tiring,
right? You know, [01:34:00] you get up in the morning, I wanna get a promotion at work so I can buy a bigger boat with 200-horsepower motors. Why? No good reason other than that you’re trapped in a Bernays-epoch status game loop. Why should you
be investing your life energy in doing that? Instead, you can play the game with a different set of rules. The rule is: what kind of cool life can I have for 4,000 watts? Right. Any game can be
played, and there are plenty of moves in the 4,000-watt world where you could have a much better
world or a much worse world, depending on the discernment you have in how to play that game. Liv: I agree. It just seems, knowing all the people I know, even,
you know, the more Game B-favorable type thinkers Jim: Right. Liv: that we have become so accustomed to just more, more, more, filling the holes in our souls with, Jim: with shit, Liv: with more, yeah, entertainment or experiences, et cetera. Jim: Yeah. Liv: It doesn’t have to [01:35:00] be material,
it can just be experiences. Yeah. Which are typically, you know, still high consumption. Jim: Yeah. Flying around in a jet, going to the bank. Liv: Exactly. They have a lot of externalities, et cetera. Yeah. I don’t know how, or see how,
particularly as we are trapped in the sort of status games of social media, which is all about showing off what you did today. You know, Instagram, it’s like, look where I’m visiting, look at the whatever new cool thing I have. Yes, you can redesign
systems and like incentives, but it feels like you also need this like internal shift. Jim: Yeah. That’s the personal change part. Right. Liv: Right. Jim: You need to have gone somehow through some change. Or just be a contrary fuck like me to say I’m beyond all that. I no longer
need to do that. But the key part is I now have to be scaffolded in a micro
society that rewards that, that says, hey Liv, that’s a great thing that,
you know, you have nine dresses and they’re [01:36:00] beautifully embroidered, right? And that you got the coolest bike anybody’s ever seen, right? That
it’s been tuned and adjusted until this is the greatest bicycle
in the valley. You know, rather than all the macro status games that we are played by. And you’re,
that’s my point earlier, that if you go through some modest personal change and then you’re thrown back into the rat race, the rat race will capture you 98 times out of a hundred. So you gotta do both. You gotta have the micro society and the personal change, and both are
indispensable. You take a rando off the street, pick a stockbroker, and toss ‘em into a Game B
village, it ain’t gonna work. No. Right. They’re gonna go nuts, or we’ll have to kill ‘em, right? This is the, you know, this is the Chris Boehm book, one of the most important books ever written. People, go read it. It’s now free as a PDF. Liv: what’s it called? Jim: It’s called Hierarchy in the Forest, and it’s about the forager operating system that
defeated the big man. Chris was a Santa Fe-[01:37:00]affiliated anthropologist,
and he goes through the biological ancestry of humanity, and says, based on that, you should predict that humans would be very hierarchical
and dictatorial, right? Like apes basically are, chimps in particular, and bonobos in a different way, but they still play that same kind of very top-down
status game. Humans figured out a way to defuse that game, and he even gives a hypothesis on
what the technological affordance was that allowed them to do that. And he claims that for most of our time as a society, as humans, the forager operating
system was highly tuned to suppress big men, people who claim general authority. And
I’ll stop here and sidebar this: it turns out Game B had come up with a similar idea before we
knew about Chris, in which we talk about role-based leadership versus position-based leadership. Right. And in Chris’s book, you know, he’ll say, the young [01:38:00] guy who really knows how to capture rabbits leads the rabbit hunt,
you know, the old tracker knows how to lead the group onto the next camp spot
without getting ambushed, the granny knows how to find the tubers. And those are the leaders for each activity. And when somebody else becomes better at that,
they just naturally cede to them. Well, position-based leadership is, I’m the chief, I’m the boss, you do what I tell you. Liv: I can tell you everything. Yeah. Jim: Yeah. It’s general. I’m in the box, therefore I have all the powers of the box, even if I’m not competent. And so Hierarchy in the Forest lays out that humans developed a distinct distaste for big men. They
honored their autonomy and their capacity to be, you know, their own selves very, very highly. And
they had a hierarchy of ways, a process, of dealing with big men. First they ignored ‘em. You
know, let’s say you’re the big man, and they could be a woman, but probably not very often; 99% of the time they’re probably men. Well, everyone,
you [01:39:00] just ignore ‘em, let ‘em run their mouth, and eventually they’ll get the hint. Nobody gives a shit. But let’s say they’re big and tough, they go around to grab people,
and you know, it’s like, I want you to do this. Then you basically laugh at ‘em. There are few things more subversive to people trying to establish puffed-up power than laughing at them. Right? And then if they get, you know,
more obnoxious than that, round up a couple of their buddies or something and try to, you know, more or less suppress us, then you exile ‘em, and if they come back, you kill ‘em. And that’s just the way it is. And I do think that- Igor: and which society slipped like that, or Jim: according to Boehm it was an overgeneralization, as
Graeber and, who’s the other guy? They wrote that very cool book right before Graeber died. Liv: doesn’t matter, Jim: Wengrow, I think, look it up. But anyway, his claim is that all forager people basically lived this way for 290
thousand years. The first 290,000 years of human existence. Liv: So basically pre the agricultural revolution. Jim: Well, [01:40:00] even the Liv: average hunter-gatherer society, that’s how they would keep power maximizers in check. Jim: Yes. Liv: Hmm. Like the alpha male, the classic alpha male Jim: Yeah. The big man, right? Yeah. And there’s probably a genetic tendency
to produce a percent or two of big men in any society, right? And we just gotta say, okay, that kind of personality type might be great as a marine or something. Liv: Mm. Jim: A corporal, no higher rank than a corporal; you should not have power over other people. But that kind of forceful personality can be good to go kill the enemy when necessary. But never,
never, never give big men control of the levers. Unfortunately, our system is designed to call to the big men: here are the levers, big men, come and get them. Liv: right. Igor: And part of it is efficient. I’m somewhat sympathetic to the Hierarchy in
the Forest idea. On the previous point though:
it requires personal change as well to move there. And we talked about that.
Someone goes on a 42-day retreat, you said, and then they come back and they [01:41:00] get thrown back into the scaffolding and it changes. But at the same time, frankly, if the world consisted of 70% or whatever percent of people who
are of the type that just came back from a 42-day retreat, it sounds pretty boring to me. It doesn’t sound very appealing. It sounds like a lot
of kumbaya and woodworking, kind of like an Amish-style society where everybody’s equal. And it doesn’t sound like,
Game B also emphasizes anti-fragility, right? If you have anti-fragility, then you can actually hit it and try to break it, explode stuff. Jim: And lots of variety, right? Yeah. Igor: Let something happen. Jim: Coherent pluralism is, you know, very cool. I mean, there could be people who spend all their time playing warrior games, right? And, you know, pickup games, kendo or something, you know,
I personally have no interest at all in singing kumbaya sitting around the campfire all the time. But I would like to do it every Friday night. You know, Jamie Wheal, I actually asked him to design a- Igor: it shouldn’t be the only [01:42:00] activity. Jim: No, not by any means. Right. And, you know, those kind of people drive me nuts, the
ones that wanna meditate all the time and all that horse shit. You know, I meditated for 20 minutes a day for 40 days before I did my John Vervaeke podcast. And I said, this is kind of boring as shit, you know? And it just didn’t do anything for
me. But gimme a nice single malt, that’s much better. But for some people, that’s the right
thing to do. And again, coherent pluralism. There may well be whole communities of kumbaya people. Let’s see how they make it in the world. Right? And there are other people who are
constantly thinking about what cool thing can we do as humans with the affordances we have, right? How do we get to the stars? You know, how do we create the ASI,
how do we figure out how to increase human longevity to a thousand years? There will still be lots and lots of people that wanna do those things,
and Game B is not at all opposed to them doing it, so long as you do it in an omni-considerate way and within the constraints of our planetary limits. Liv: Well, I mean, when you mentioned ASI, that might be one that is very, very hard to get consensus on [01:43:00] and whether that
can be constrained, because by definition you are building something that cannot be Jim: not by definition, but it is a strong inference, Liv: by extrapolation, yes. Jim: It may well be the case that we can’t control it. Igor: Hope it’s not by definition unconstrainable. Liv: No, you’re right, not by definition. Jim: Some people think it is. I mean, Yudkowsky- Liv: to me, it feels like it is, I have to say Jim: Eliezer Yudkowsky believes that by definition it’s uncontrollable. I argued with him about that. Mm-hmm. And I do think it is potentially controllable. We
could fuck up. There’s a lot of ways we could fuck up: ASI, nanotech, biotech. We may have already fucked up with the forever chemicals. Right. Liv: Right. Jim: And then the one that we haven’t talked about yet, but it’s the one that’s nearest to the current event horizon, is our collective sense making just breaking down, and it seems like it is almost there. Liv: Yeah. You talked about the need for a trillion dollar idea that you’re putting out there. You want someone to work on this: basically a system of antibodies for mis- and mal-information
combined with a system for positive collective [01:44:00] sense making. So put that into English for me. Jim: Well, it was in English. Liv: Better English. Jim: Okay. So I’ve been working on building what we call the internet for 44 years.
My first job in tech was with a company called The Source, which was the world’s first consumer online service. I personally designed our second-generation forum system. I personally designed our email system. I personally designed the precursor to blogs. You know, so I’ve been doing this shit for a long time, and I loved that stuff
back in the good old days, and for the longest time I would say that the nets were of great
positive value to humanity. And they’re still of great positive value to humanity. The question is, are they of great net positive value to humanity? And I’d say at this point
it’s a mighty close call, and it’s getting bad rapidly. And so the trillion dollar opportunity
is how [01:45:00] do we use technology to build for ourselves personal agents
that let in what we want to let in, including some stuff that we don’t specify, ‘cause we do want serendipity, right? And you set your serendipity setting, you know: I want 8% really crazy shit. Right? And then
also lets us go outbound very easily. I mean, I don’t travel a lot by design,
but I recently did a fairly complicated bit of travel here. That’s why I’m here in Austin. Jesus Christ, is it difficult to use these travel systems today. Why can’t I just think the thought,
you know, plan my trip for me? And of course we could actually do that today almost. It’s very close. And so the info agents use the affordances that we already have wired in,
plus AI, plus filtering, plus some human curation. You know, I do think there’s a human curation component in there. And part of the trillion dollar opportunity is curation workstations for people to be able to be compensated for being
curators to feed into the networks. And then to the last part of what you [01:46:00] read
that was one of my rants on Facebook, I think, wasn’t it? Liv: I don’t know. Jim: Oh, I don’t know where you got it from. Liv: Oh, the trillion dollar? No, you tweeted it recently. Jim: Oh, okay. The other key part of it, which makes it go from the billion dollar opportunity to the trillion dollar opportunity, is that
each of our info agents develops negotiated sense making with other info agents, right? So your info agent and my info agent: I say, keep an eye on Liv, see
if she says anything intelligent, right? And then you say the same thing about me, or maybe you don’t say it about me, but it’s via somebody else who listens to me.
And eventually that pattern of collective sense making is almost automatically curated. And of course, it’ll never be perfect, right? But in general, you’ll get what you want and what you should have at any given point in time. And almost all the
shit will be filtered out. And if we could do that, the shit businesses will eventually just
go away. And instead we’ll actually have, or it is possible we’ll have a [01:47:00] bifurcation. ‘Cause at least initially, you’ll have to pay money to have the info agent. And
it may be a fair bit of money initially, right? It could be as much as a hundred or $200 a month. And one of the core key rules of at least the Ruttian
info agent: no fucking advertising, goddamnit. Or almost: you know, I do like the idea of being able to personally set a price for an ad, right? Say if you wanna send me an ad, it costs you 10 bucks. Right? I also think that,
my wife will report this to be the case, every time you go to the doctor they send you this, take this survey, right? Yeah. My law is I’m not gonna do a goddamn thing unless you pay me $10. Right. And each person should be able to set their price, and the guys on the other side will say,
all right, it’s gonna cost me 10 bucks to send a survey to Jim, and he may or may not open it.
But the statistic will also be published on how often I open ‘em. So, okay, if you pay me the 10 bucks, I’ve taken your survey 83% of the time,
so it’s probably worth your while to do it. Igor: Wouldn’t [01:48:00] your info agent just fill it out? Or, yeah, you don’t want to use the cycles? Jim: Well, I may or may not. Right, right. Liv: maybe there’s more to it. Jim: there could be qualitative questions like,
you know, how was the experience at the doc’s office, and the info agent can’t do that? Igor: Or could it? So, just to make it a bit more specific about what
you mean: what’s currently being designed is more that you’re coming in and talking to this AI, or you’re going to this other AI to get some products and services,
and it’s gonna have an agent that it uses. But you’re describing that each person has their own personalized, adjustable agent that they develop a trust relationship with. I was recently thinking it would be great if I could, I actually wanted to do it this weekend, just fine-tune a model on the basis of all of the books that
I would like myself to have read but don’t get around to. My friends, the people that I trust, constantly recommend, that’s like 500 books a year. I will just not read that. Jim: But that would be great. Igor: I could feed it into a fine-tuned [01:49:00] model. Jim: Or even better, why should you do the work when, you know, somebody has created that curation
and you subscribe to two or three curations, right? Mm-hmm. So you don’t have to do all the work. Say you’re interested in five areas,
you can subscribe to the five people that have built those sets. Igor: ‘cause maybe then it becomes kind of closer to the thing that
is also problematic with representative demo- where now it’s a basket of things. Whereas basically, the question is, how much time can
I personally put in to do the selection versus, like, what’s the benefit? Right? Jim: What’s the benefit, right. I’ll give you an example,
you know, ‘cause I’m so interested in this trillion dollar opportunity, I’ve actually built little bits of it for myself. And by the way, I am not gonna do it. People out there: I am too old, I’m too rich and I’m too lazy.
If I was 45, this is exactly what I would be doing, right? So a team of 45-year-olds and younger should do this. If you want some advice from the old warlord,
gimme a call, right? This needs to be done. I want it, but I’m not gonna do it. So I’m just giving this one to the world. Have [01:50:00] at it, people,
right? And that’s the way it is. I’ll give you an example. I’ve done a couple of these little teeny corners of the trillion dollar opportunity, just
’cause I want ‘em. I subscribe to, this is embarrassing to admit, 117 Substacks, right? Liv: Bloody hell. Jim: I only pay for like 18 of ‘em, but I get 117. Liv: what fraction do you read? Jim: I don’t know. That’s a good question. But to deal with the problem, I created an
AI agent that sucks them all in outta my email box. And I trained
basically an OpenAI GPT. Well, not quite a GPT, something like a GPT,
using some of my own technology and calls into the OpenAI API, et cetera. And I was able to cheat ‘cause I used the transcripts of all the Jim Rutt Show episodes as the
source material to train this thing on. So it now reads, on average, about 75 Substacks a
day and ranks them for me by the probability that I would like to read them. Oh, nice.
Create summaries at two [01:51:00] levels, a one paragraph and a six paragraph summary. And just because I could it also creates an evocative illustration. And so, when I’m,
when I’m running this thing, I just come in in the morning and it’s got already there, all the substack that have come in so far today ranked in order
by its best guess of my level of interest, a one paragraph summary. And if I wanna click on that, a seven paragraph summary. And if I click on that
the whole thing, and oh by the way, I don’t even have to wait for network latency ‘cause it’s already downloaded the whole thing and has it in memory.
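A toy version of this kind of ranking agent can be sketched with nothing more than bag-of-words similarity against a personal interest profile, plus the "serendipity setting" mentioned earlier, which randomly promotes a few low-ranked items. Everything here, the profile text, the articles, the knob, is invented for illustration; Jim’s actual system uses his transcripts and calls into the OpenAI API:

```python
import math
import random
from collections import Counter

def bag(text: str) -> Counter:
    """Bag-of-words vector for a piece of text."""
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two bag-of-words vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def rank_feed(profile: str, articles: dict, serendipity: float = 0.08,
              seed: int = 0) -> list:
    """Rank article titles by similarity to the interest profile, then
    promote a `serendipity` fraction of low-ranked items to the top."""
    pvec = bag(profile)
    ranked = sorted(articles,
                    key=lambda t: cosine(pvec, bag(articles[t])),
                    reverse=True)
    tail = ranked[len(ranked) // 2:]               # low-ranked candidates
    n_wild = min(round(len(ranked) * serendipity), len(tail))
    wild = random.Random(seed).sample(tail, k=n_wild)
    return wild + [t for t in ranked if t not in wild]

# Invented interest profile and articles, purely for illustration.
profile = "complexity economics energy systems game theory"
articles = {
    "On energy transitions": "energy systems and the economics of transitions",
    "Celebrity gossip roundup": "celebrity fashion gossip this week",
    "Complexity and markets": "complexity economics game theory markets",
}
print(rank_feed(profile, articles, serendipity=0.0))
# → ['Complexity and markets', 'On energy transitions', 'Celebrity gossip roundup']
```

A real version would replace the bag-of-words scorer with an LLM or embedding model and add the summarization step, but the rank-then-inject-serendipity shape stays the same.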
So this stuff is operating at local CPU speed, not network speed. Mm-hmm. And you know how long that took me to develop? About 20 hours. Liv: Can I have it for myself? It sounds amazing. Igor: Well, you need to put out your own podcast transcribed as well,
because it’s Right. Liv: well, there’s a way now I can do that. I’ve got a podcast. I just need the transcripts and Jim: it’s now trivial. WhisperX will make an almost perfect transcript of your audio. The one that my producer uses for my podcast is even better ‘cause it’ll [01:52:00] put the names of the people on it. Mm-hmm. WhisperX will basically
just say voice one, voice two, et cetera. But yeah, it wouldn’t cost much to do as a process.
So it’s interesting. I do almost perfect, high-dollar transcripts for my
podcast, and have from the very beginning, long before LLMs existed, because I hate podcasts. I hate listening to ‘em. I hardly ever listen to podcasts, and even when guests that I have on have had most of their recent exposure in podcasts,
I get those podcasts transcribed. Right. I’m a reader. Reading is my thing. Not everybody is that way. Everybody has a different modality. And so I had these great transcripts, and all the LLMs
know a shitload about me and Game B and all the crazy shit I’m interested in. It’s scary how much they know. Liv: Oh, I’m well aware. I used one for researching this. It’s very helpful. Jim: I’m pretty sure that the reason they know all this is because of those transcripts. Liv: Right, right. Igor: Yeah. Tyler Cowen pointed out that his new book he’s writing [01:53:00] for the AI rather than for humans, because Jim: that’s very interesting. Igor: If you want to have distribution in the future, it’ll probably be, as more and more communication happens as humans talking to AI,
you actually want to be part of the essence that the AI carries in itself as it talks to you. Jim: Well, that’s a damn interesting idea. I’m gonna have to- I’ve been meaning to have him on,
so that’s a good excuse to reach out to him. Yeah. Igor: Also, Andrej Karpathy made this other separate point which relates to it, which is that, again, the same consideration leads to: probably you want
to make your websites over time become more and more AI-readable, which means different preferences, right? It doesn’t need high-megabyte images. You actually would rather have text files and various other things there that are very easily extractable for the AI. Liv: That speaks to some kind of dead internet theory as well. Although it’s not dead internet theory, it’s just, like, AI internet theory. Igor: I think potentially, yeah, exactly. It’s more just changing what the interface of the engagement
is like, right? Is the human just interfacing with [01:54:00] dead text, or do you have this other method? And again, Liv: we’ll just have different layers. You’ll have your AI-readable layer of your website and then your human interface. Igor: You want to be able to come back to the base truth,
especially while you often have this false information that is otherwise told to you. But for many use cases, this is great, to be able to combine it in this way, I believe. So,
a topic that is, I think, fortunately coming up more on people’s minds now, and Anthropic recently published a podcast about it, and David Chalmers is now working on it, and Rob Long, some of the great philosophers of our time actually, is the topic of AI consciousness,
to which many people initially have a reaction that is just super strongly against: how dare you even ask the question. Right. But you’ve also looked into this Jim: a lot. Yeah. It’s been one of my core interests since 2014. Igor: Do you wanna quickly make the case why
it’s an important question that we do need to wrestle [01:55:00] with? Jim: There’s a whole bunch of reasons. I’d say, oh, by the way,
and we can now announce I’ve been advising an institute called the California Institute for
Machine Consciousness that’s been under wraps for a couple of years. It’s now come outta the closet, so check out their website. I don’t remember exactly,
I think it’s cimc.ai probably. Mm-hmm. But Joscha Bach is the chief scientist and executive director, who’s a brilliant,
brilliant dude; he’s been on my podcast a few times. So anyway, this is something I’m obsessed with. I’m an official advisor to them, and I might even get more involved in that. My
interest in machine consciousness in particular basically has two parts. One,
a scientific study of consciousness is famously difficult, right? It’s really hard to understand what’s going on inside your head, right? And self-reports can help to some degree. We can get secondary evidence like EEGs,
et cetera, but there’s a limit to what we can do so far. And the imaging
technologies are getting better, but not that rapidly, for understanding the [01:56:00] dynamics. We haven’t even mapped the networks yet, let alone their full dynamics. So there’s a long way to go on being able to understand animal consciousness in a skull, basically. Igor: Well, but also because not all theories would even presuppose that you would find the core
of consciousness by looking at it through imaging. Right. Like, we don’t
have an agreed-upon expert consensus theory of what is the best theory of consciousness. Jim: Yeah. Even in the materialistic realm, where we say that it is certainly an emergent phenomenon of neurons, exactly how that happens we don’t know,
and we don’t even know how much electrical fields are involved. So, so much shit we don’t know. Right. And you can’t do fine-grained experiments on living animals, et cetera. So, my initial interest: since 1998,
I’ve been interested in the science of consciousness, and I’ve read everything. I’ve had most of the leading people on my podcast. I’ve read most of their books,
hundreds of papers. It’s a domain that I’ve gone extremely [01:57:00] deep in, but I can see that our progress is gated by this difficulty, that the interesting stuff is almost impossible to tease apart. If you could create a machine consciousness, then you’d have no problem metering the hell out of it. Right. Okay, how did it do its consciousness thing,
right? And let’s make this change. Let’s, you know, put this Bernays propaganda into its head and see what happens. Right. What is the actual bit rate between perception and consciousness? You know, there’s a lot of controversy about that. Our perception rates are in the millions, tens of millions of bits per second. It’s thought that the
rate of change in our consciousness is on the order of 50 bits per second. Igor: But wait, sounds like you’re potentially pointing at a different
thing than what David Chalmers would describe as the hard problem of consciousness. Jim: Ah, well, of course, this is one of the key questions, and the
Ruttian position is that there probably isn’t a hard problem of consciousness. Igor: It’s my position as well. Jim: And that, Liv: well, you found a friend finally on this. Igor: No, no, no. There are quite a few of us, Jim: there are lots of us. Igor: And Daniel Dennett is one, Jim: [01:58:00] Seth is another, yeah. There are plenty of people who have sensible views on consciousness. All the rest of ‘em are morons. Not true, not true. And especially Chalmers is quite a, Liv: quickly, just, just quickly, for anyone who doesn’t know it, can someone define the hard problem quickly? Igor: So, oh, well, you go ahead. Jim: Yeah. The hard problem of consciousness is, let’s call it the ineffable aspect of experience. You know, the classic example is the blue of that sofa over there, the actual feeling, Liv: it’s purple Jim: the sense, was it purple? I’m a guy, right? We only got three colors, right? Got blue, green, and yellow. So it’s blue or purple, whatever. Whatever it is. It
is. But there’s something in my brain that feels it, and it has all kinds of side effects,
you know, because this reminds me of a French gay whorehouse or something, right? And all those things are co-existent with the actual feeling of the blueness. That’s called qualia. And so what is the nature of
qualia is [01:59:00] fundamentally the hard problem of consciousness, Igor: where on the other side, basically, the easy problem of consciousness is
how do you see blue in the first place, right? Like, what happens in your brain, which is also interesting. And that’s what
you can definitely find out with imaging. That’s why it’s the easy problem. But then there’s the what-it’s-likeness. Liv: Right. Igor: Some would say, okay, the famous paper that started it, Thomas Nagel, right? “What Is It Like to Be a Bat?” it’s called. Yeah. Jim: What’s it like to be a bat? Which is his very brilliant insight that bats have echolocation. We don’t, so the consciousness of a bat, the qualia of the bat Liv: must be very different. Jim: Must be, like, we can’t even imagine what it’s like, right? Liv: So you guys claim that it doesn’t exist as a problem, we Jim: as a problem. Igor: I think it’s a question. So, I have a bad explanation of why,
but, as science progresses, you sometimes notice that phrenology
was stupid and you just, like, discard it and it’s explained by other means, right? Jim: Élan vital is the other [02:00:00] example, the idea that life was some Liv: vital essence. Jim: Vital essence. Igor: So it’s a question that many ask themselves, and I ask myself:
what are today’s sciences that we will, in 50 or a hundred years, look back on and say, that has been better explained by something else? A candidate for that is something that involves,
well, the best explanation of what the hard problem is, is, like, what-it’s-likeness, right? That sounds a little like we might be talking about some
bs. Maybe there is a better functionalist theory that would describe the specific dynamic that leads to the perception of there being a what-it’s-likeness. It might be just Jim: I go one step further than that, which is: once we understand the machinery, it’ll become obvious that a machine, well, not actually a machine,
but a device that operates in this fashion will produce the effect of that, right? And here’s my
stupid example. As kids, we used to have these flip books, you know, they were like cartoons, Liv: right? You flip through ‘em and you see an animation. Jim: I still remember the first one I saw was in second grade. It was a guy
rowing a rowboat across a lake. And so [02:01:00] this isn’t a perfect analogy, but it’s fairly close: we’re gonna discover that we’re the guy rowing the
rowboat and we’re writing the book at the same time. And that’s just the way it is. That’s how mother nature happened to evolve the machinery of our brain.
Turns out consciousness is quite expensive, both energetically and in genetic information. So it’s certainly paying its bills, right? Mother nature would not have kept consciousness for probably 200
million years at least if it didn’t have a payoff in terms of inclusive fitness. Igor: But one point about why it’s also good to mention the Nagel what-it’s-like-to-be-a-bat: it seems we both like the functionalist theories more
than some of the other ones. That said, it’s still not settled, right? Like, I would only be significantly, but not extremely, surprised if it turned out to be completely different. Jim: Exactly. And that’s the beauty of it. Igor: And that’s why I think the way to look at machine consciousness is asking oneself the
question: given this theory of consciousness,
could there be machine [02:02:00] consciousness? And it’s not clear, right? Jim: Well, that’s why we need to try to build one, right? Because if we can, based on premise A, then that indicates that premise A is at least
somewhat representational of consciousness. Here’s the other very important move, and most people are so confused about this. I’m gonna try to say it slowly. This is from John Searle, the Berkeley philosopher of consciousness, and also other things.
He likes to say that consciousness is like digestion. You can’t put a finger on it and say,
oh, that’s your digestion, right? It’s an ensemble of things going on: your throat, your tongue, your teeth, your esophagus, the appendix, you know,
the small intestine, and then, the Rutt corollary, it comes out the end. Same as your consciousness, shit, right? And then he goes further, and
I’ve just stolen this and run with it. In the chemical,
pharmaceutical and food industries, they have devices called digesters. And basically you put in rough raw materials and bacteria or yeast and acid, [02:03:00] and
reactions occur, and more valuable chemicals come out the other end, and they’re called digesters. But they’re not at all functionally like human digestion, yet they produce an analogous
result via roughly analogous means, you know: rough chemistry, acid, microorganisms,
heat, time equals more valuable product. And so I have to keep reminding people that
when we create the early machine consciousnesses, they’re not gonna be just like Shakespeare, right? They’re gonna be analogous. The machine consciousnesses will be as analogous to
Shakespeare as the digesters in the pharmaceutical industry are to your
digestion. And so don’t expect them to be exactly like us. They won’t be. In fact, that’s actually the fun part. ‘Cause the design space of consciousnesses, or things that we could reasonably call consciousness, is
probably gigantic. And ours just happens to be one particular, constrained kind of [02:04:00] consciousness
that could pay its bills to mother nature over evolutionary time. Igor: I think that’s a really important point, because the thing that
I’ve been throwing back at people sometimes, when they say, oh, when you claim that machines can have consciousness, and I’m not saying that the current ones do, but if you say that they might,
you’re just anthropomorphizing, projecting your kind of assumptions onto them. It’s like, I think it’s actually the inverse. You are anthropomorphizing,
not you personally, but others are anthropomorphizing, the concept of consciousness.
There are many other consciousnesses to be had outside of the human one. Jim: And, oh, by the way, we Igor: like the bat, for example. Jim: And there’s one even more radical on earth, probably: the large cephalopods, the big squids and octopi, right? They developed what appears to be something like consciousness,
but very radically different, ‘cause they have eight brains or 10 brains rather than one. But those brains talk to each other, and their neural systems are unrelated to ours. So we may have a very alien consciousness here on earth, and machine consciousnesses will be
more alien than that, and that’s okay. So that’s number one: I want [02:05:00] to be able to have substrates to work on theories of what actually is the man in the rowboat in the flip book. How
does that actually work to produce something that people would recognize as consciousness? But the other one, this comes from, frankly, my Game B work. A lot of people in and around these
Game B-adjacent communities are full of more blather and bullshit about consciousness than about any other topic, right? And, you know, the cosmic consciousness, you know,
all this horse shit, right? And so I think that at least as important as
advancing the science of consciousness is de-blathering the discourse of consciousness. And the more we can actually say, in firm ways that we can demonstrate and
not just talk about, this is what we know about consciousness right now,
and go interact with this thing, you tell me what you think, if it’s conscious or not, right here, here are the voluminous experiments that have been done on this thing, and, oh, by the way, this is exactly how it works, here’s the source code, right? And [02:06:00] when people see that, they’re gonna realize,
I think, I could be wrong on this. I mean, again, as you said, this is an area we could all be wrong in. Maybe there is a cosmic consciousness, you know, maybe Santa Claus exists. Who the fuck knows, right? But de-blathering the insane discourse of consciousness that looms
very large in our liminal communities is itself hugely important and valuable. Igor: Yeah, I like that as a take on why it’s important to figure out these questions.
I thought we were gonna go to what you’ve said before, which I like a lot. Liv: Well, yeah. I mean, to me the value is, like, we are building these new,
whether you call ‘em technologies or life forms or whatever, potentially by the trillions,
these digital potential sentiences. And either we don’t give them sufficient moral valence when they have it, which would be devastating. It’s like all the horrors we’ve got of factory farming currently, but then scaled up massively,
you know, many orders of magnitude larger, and potentially even more suffering capabilities. Or we do [02:07:00] falsely give them moral valence at our own personal cost. You know,
we cede our rights to the machines because there’s many more of them, or they, Jim: and they’re smarter, Liv: have stronger preferences, and they’re smarter, and blah, blah, blah, when they don’t deserve it. In either direction, it’s a huge thing to fuck up.
So it’s unbelievably high stakes. But that is a very funny additional point, that
it would also be nice. ‘Cause, you know, you’re right, the term consciousness is used as this term of convenience, almost, to just fit into people’s own moral views. I actually happen to be someone who believes that, like, I do think that ecosystems have a form of consciousness, for example. Like,
I do think that shit’s going on in the Amazon rainforest. Like, I wouldn’t be surprised if there’s some form of consciousness there. Igor: So, we need to figure out this answer quicker so that she can stop talking. Liv: I see your eyes glazing over. Jim: No, no. I wanna make a very important point on this, to which I’ll eventually see a reaction. And this is again one of the
great confusions of this discourse around consciousness: people get consciousness and [02:08:00] intelligence confused all the time. Sure, a bacterium is intelligent, right? A bacterium will steer towards sugar and away from, mm-hmm, acid or hot water, right? The Amazon is certainly intelligent, right? It’s a very complex adaptive
system that confronts collective problems. And through, what the
hell’s the name, the fungal networks, mycelium networks, and other methods, and also through evolutionary time, et cetera, it solves problems. It’s intelligent.
But at least the John Searle model, that consciousness is a specific architecture
that produces a specific kind, a wide class, ‘cause it can include tadpoles and bats,
but it doesn’t include rocks and two-by-fours, would say that, if you use that version of
the definition of consciousness, the Amazon is intelligent but not conscious. Liv: Sure. But I mean, you’ve referenced the concept of egregores earlier in the conversation. Jim: Yes. Right. They’re not conscious. It’s very important. Liv: They’re not? Jim: They’re absolutely not. In fact, this is one of the very first times I was exposed
to the [02:09:00] idea. Bret Weinstein was one of the first founders of Game B, right? Mm-hmm. And he introduced a concept we called Goliath, which was sort of the Game
A egregore. And he presented it as if it was conscious. We had, like, a three-hour group discussion, where at the end of it he stood down and said,
you’re right, they’re not conscious. And this is very interesting and important: that we can have complex adaptive systems that seem to have trajectories without any consciousness. Liv: So they’re kind of like zombies, p-zombies, they’re, Jim: they’re actually zombies. Liv: Right. Which would imply, so, and then by that extension,
are you saying that it therefore doesn’t have any moral value? Jim: It has implicit moral value, but Liv: because it impacts other conscious agents, Jim: but only via other conscious agents, right. It doesn’t wake up in the morning and say, I’m gonna be a Game A motherfucker, it just is a Game A motherfucker. Liv: But again, do you know that? Like, how do you differentiate, you know, an
egregore, or an ecosystem that is intelligent, that is adapting, that has these [02:10:00] hyperdimensional sort of things that it can be affected by and respond to? Like, why the confidence that that is not conscious? Jim: Well, depends on how you define consciousness. And in a very simplistic way,
I would describe at least the tetrapod branch of consciousness, that’s the one that comes up through reptiles, amphibians, and eventually
us and birds, as organized around a unification of the senses in a coherent, smooth scene. Right. The fact that you’re looking at us, we’re experiencing this, you’re feeling a rumble in your belly, you feel your toe is a little
bad, you hear that slight buzz over there from the air conditioning unit: that is your conscious scene, and it requires a shitload of neural support to create that thing. It’s very expensive. And if you don’t have that, it ain’t consciousness of the tetrapodal variety. And I will guarantee you a tree doesn’t have that kind of thing, ‘cause
it doesn’t have the right kind of network [02:11:00] architecture. I guarantee the Amazon doesn’t have a thing like that, doesn’t have the right network architecture. So I can say quite firmly that it doesn’t have consciousness as we apply it to the animals that we know. Liv: Mm-hmm. Jim: But it has intelligence, and people get so confused by the difference, ‘cause that thing is so intelligent,
how could it not be conscious? And this is important to know about AI also. Igor: They’re a great example of that. Jim: The conversation Joscha and I and some other people have had starting in 2014
is: is consciousness a shortcut to AGI? And we’re not sure. And one of the reasons that, you know,
the California Institute for Machine Consciousness is gonna work on these problems is maybe it is a shortcut to AGI. I’m not sure. I don’t even have a strong opinion about it one way or the other, but I am reasonably sure that we can create AGIs without consciousness. And
we can probably even create ASIs without consciousness, and maybe we should. Igor: I think part of [02:12:00] why there is a natural impulse to ascribe
consciousness to these big systems is that, in living things at least, there is at least a correlation, but certainly not a causation, between how complex a brain is and how
many neurons there are in it, and how likely one would be to find consciousness within it. There’s certainly a correlation, at least according to
various functionalist theories. And the point that you made, that
it is expensive to develop, so evolution wouldn’t have done it initially. It would’ve done it at the point when you can afford it and you have already satisfied various other needs Jim: and you’re getting a win in every generation, right? Igor: Yeah. So you don’t expect, like, mussels to have consciousness,
for example, their neurons are just too few to, Jim: they’re not organized the right way and Igor: they’re not organized like that. Yeah. So, but yeah, there is basically,
but we do assume that, at the point of some neocortex and the various complex systems that a brain has, you can expect [02:13:00] there may be an
organization that satisfies the thing that we at least would describe as consciousness, which is related very closely to attention or perception or something like this. Yeah. Jim: Well, in fact, the way I use it is that this animal consciousness, of the sort that involves a
conscious scene: attention is the cursor of consciousness. Igor: Mm-hmm. Jim: We’re seeing a new conscious scene about every 25 milliseconds,
40 times a second, plus or minus, coarse-graining. You can switch attention at about once every six frames. Most people don’t. And the brain says
switch attention or don’t switch attention about once every 250 milliseconds, and the creation of
the frame, and then us walking through what object has attention, is actually what consciousness is. Igor: Hmm. Which one do you ascribe to most? It sounds close to attention schema theory as well. Jim: I’m not sure I know what attention schema theory is. Igor: Oh, that’s Liv: that’s what he [02:14:00] described, it’s almost exactly that. Let’s not get into that. Yeah, let’s not, Jim: I’m gonna read up on that. Igor: You’ll love it. It’s Michael Graziano. Have you heard of it? Yeah, certainly. It’s
almost exactly what you described. But in any case, it’s probably worth saying, even though Jim and I speak to it with kind of a, oh, it’s definitely not that, and we
feel like it’s this, it is still the case that we don’t currently know, and we don’t know with AI. And you’ve said why you think it matters. And I think the thing that we would find insufficient is if the argument for why AIs don’t have
consciousness was simply because they’re on a different substrate, because they’re silicon-based, whereas we are carbon-based. I don’t like that as an argument. Jim: Yeah. I think Searle kind of demolishes that, right? So long as you understand we’re talking about machine consciousness by analogy and not by congruence,
right? Mm-hmm. In the same way that digesters in the food industry, they do different things, they’re made outta different materials, [02:15:00] aluminum, typically, and they use yeast instead of bacteria, but it’s still digestion, right? And as long as we understand that that’s what
we’re talking about with computer or machine consciousness, then it
starts to boggle credulity to say that you couldn’t do it just ‘cause it’s silicon. Mm-hmm. Unless we’re all wrong and there is a cosmic consciousness that permeates the universe that only biology can tune into. It’s, you know, it’s possible.
I would bet large amounts against it, but at this stage our knowledge, while it’s a hell of a lot more than it was 40 years ago, is still very incomplete. Igor: So I think we’ll invite a guest on, like Chalmers, also to discuss that further, because
we have to wrap up. But I want to just extend your point from before, that it’s actually even more important that people think about
it, because, yeah, I think undoubtedly over the next years, more and more people will
actually ascribe consciousness and moral relevance to AI models, like Liv: they already are Igor: [02:16:00] people already did that. You had Lemoine, et cetera. And children have a high propensity to just do it with dolls, everything. Jim: Oh, very dangerous. Very. Yeah. Igor: And this could just be really, really bad, and I think we should have a confident-ish, at least, answer of no, this is wrong, or, it is there. Jim: Right? Yeah. Daniel Dennett laid out an essay, I think it may have been even in the New York Times, very high profile, where he suggested that it should be a felony for anyone
to create an AI that was able to convince people that it was human,
and that should actually be a very serious criminal violation. Not quite up to the level of the Butlerian Jihad, but close. Liv: But it feels crime-against-humanity adjacent. Yeah, yeah, yeah, yeah. It really does. Yeah. I completely agree. ‘Cause, like, Jim: and I would be okay with that. So in other words, think about operationalizing it, you know, you could require, you know, a chat bot every 90 seconds to say, I’m not a human. I’m not a human, Liv: I’m a machine. Right. Just a reminder, I’m not a human, I am a machine. Exactly. You don’t need to be mean to me but Jim: and I was probably designed [02:17:00] by an evil Game A motherfucker assigned to extract the nickels outta your pocket. Right. That’s what I would require it to say, but it probably won’t say that. But that’s a very, very important point. Yeah. Without some discernment about these things,
we could be really sucked into a bad place. I mean, ELIZA, the famous
AI therapist, it was like a thousand lines of code or something, that emulated a Rogerian therapist. Graduate students became
addicted to it in the late sixties, you know, and
I don’t think they ascribed consciousness to it, but they reacted to it as if it were a person. Liv: Yeah, no, I mean, you’re seeing, and even worse, they’re now hijacking,
you know, the male sex drive, which is like, good luck to anyone, any young man,
resisting the temptation. You’ve already got the OnlyFans model, which is bringing us back full circle. It’s all about dopamine hijacking, right? But it’s not even real humans at this point, it’s AI. It is not a human writing the text, [02:18:00] and the images of these girls are increasingly just
gonna become AI-generated. They’re not even real women that they’re looking at. And
these companies are making, you know, it’s set up so easily for men to just donate. Just, oh, take your top off, take your this off, whatever, do this, do this, do this. Jim: Yeah. Rats hitting the bar for more cocaine. Liv: Yeah. It’s the ultimate rat-heroin thing. So if they had to constantly keep saying, you know, if we make it a crime to be misleading about this, you are not speaking to a real
human being right now, that might be like the last semblance of keeping a human in the loop. Yeah. Jim: Something like that. Something like that. There’s something about those AI girlfriends that has creeped me out at an extremely high level of creeped-out-ness. Right? Liv: Yeah. I mean, there’s something to, again, if you’ve got, as you said, the deepest, darkest incel who can’t go out and find any companionship anywhere anyway, sure, if that gives someone some meaning and a sense of companionship, I’m all for it. But it’s more like, again, [02:19:00] it’s not like
it’s just going to be them that get sucked into this. As these become more and more effective and we forget what it was once like to actually really interface with real humans,
and especially across generations, we are just gonna get more and more detached. No one will know how to interface with humans anymore. So yeah, it’s quite the slippery slope. Igor: Let’s not do that. Liv: No, let’s not do that. Jim: But we will,
unless we consciously choose not to. Alright. Okay. Liv: So to finish up, we like to do this with our guests. Okay. A
series of rapid-fire predictions. I want you to be as System 1 here as possible. So, just whatever your intuition is. Jim: Okay. And you’re gonna ask me questions and I’m gonna fire off. Liv: Yeah, exactly. Just give your numerical probability. Jim: Okay. Liv: Okay. Probability that you go back to using a flip phone in the next two years. Jim: 30%. Liv: Probability that the concept of cognitive sovereignty becomes
a mainstream topic of public discourse, similar to the concept of privacy today, by 2030. Jim: 20%. Liv: Probability that you break your social media six-month [02:20:00] sabbatical this year. Jim: 20%. Liv: Probability that the dominant global economic model shifts significantly away from GDP growth as the primary measure of national success by 2050. Jim: 50/50. Liv: That’s confident. I like it. Probability that Game B-
adjacent design principles are explicitly referenced in governance documents of at least a hundred significant DAOs or intentional communities by 2030. Jim: 30%. Liv: Probability that a Game B proto-community of a thousand people exists by 2030. Jim: 80%. Liv: Ooh, confident. Probability that a Game B proto-community of 1 million people exists by 2040. Jim: Hmm, on the math, 30% also. Liv: Okay. Probability that you personally will live full-time in a proto-Game B community for at least one year by 2035. Jim: 50/50. Liv: [02:21:00] Probability that the leading current AI models are conscious. Jim: Zero. Igor: Probability that one of the functionalist-ish materialist theories is accurate
as a description of consciousness. Jim: 80%. Liv: Probability of AGI by 2030. Jim: 40%. Liv: And by 2040? Jim: 98%. Liv: Okay. Probability that the hard problem turns out to be false. Jim: 80%. Liv: Awesome. Thank you. Just to finish up, what’s the most win-win thing that you can point our audience towards
as a takeaway from this conversation that you would like them to go and do? Jim: To actually do? Mm-hmm. Oh, there’s a simple one. When you think about liking or retweeting or sharing something,
stop and think: is this good? Take an [02:22:00] omni-considerate view.
Is this good for all of the above: yourself, the people who will receive this
signal, and the human race? And if the answer isn’t yes to all three of them, don’t do it. Liv: Love it. Where can people find you? Where do you wanna steer them to, especially while you’re on your social media sabbatical? Jim: You can send me DMs on Facebook or Twitter. I do log in
once a week when I’m on sabbatical to collect my messages. You can also hit me at jimruttshow.com. Liv: I highly recommend it. It’s a great show. And if people wanna get involved in Game B? Jim: Go to the Game B Facebook group. Liv: Awesome. Jim: And if you wanna do some reading, go to the Game B wiki and you can read some
somewhat older materials, but still some fascinating stuff. Liv: I’ll include a bunch of these links in the show notes as well. Jim, thank you so much for coming on. Jim: Thank you. That’s been amazingly good, time well spent, lots of fun. You’ve given me a few things to think about. Liv: Thank you. Well, thank you for coming on Win-Win. Sweet. That’s [02:23:00] the goal. Jim: Make it a win-win, win-win, win-win. That was great. Liv: Thanks Jim. Jim: I think it was the best podcast appearance I’ve ever done