Denizen

Redefining Progress with Alex Randall

Episode Summary

How do we define progress, and how do we go about achieving it? How are our current notions of progress catastrophically problematic, creating much bigger problems than they solve? What would authentic progress look like, and how might we go about achieving it?

Episode Notes

Resources:

Episode Transcription

[INTRODUCTION]

"Alex Randall (AR): The way we think about what wisdom is, as opposed to what simply knowledge is, is an important foundational set of principles for how we define progress. That we want to be embodying wisdom in our pursuit of progress in the sense that when we are wise, we are aware that there are wider consequences of our action. When we are wise, we have a sense that the choices we make might be serving narrow goals, as opposed to holistic goals and they might be harming things we value and need. These are all the foundational principles that underlie wisdom and maturity in humans and the same thing needs to be developed in terms of our relationship to progress and how we pursue what should be a civilizational betterment, rather than just narrow-tech progress."

[EPISODE]

[0:00:44] Jenny Stefanotti (JS): That's Alex Randall. He's an editor at The Consilience Project, one of Denizen's partner organizations, which publishes important articles that help us make sense of the world today. This is the Denizen Podcast. I'm your host and curator, Jenny Stefanotti. In this episode, we're talking about progress, specifically exploring the question: what is progress, and how do we achieve it?

A couple of months ago, The Consilience Team published an article investigating the narrative of progress, where and why what we currently deem as progress is catastrophically problematic. They actually call it fake progress, because it creates much bigger problems than it solves. The article goes on to articulate what authentic progress looks like, and gives us specific suggestions about how it could be realized. A critical point here is understanding how background narratives about progress can play a profound role in creating the biases that shape our assessments of new technologies. You can understand the importance of this inquiry right now as companies race to develop AI under investing in safety in the name of competition.

It's a great read. I highly recommend it, but you can catch the key ideas in this conversation with Alex. It's worth noting to our listeners that Daniel Schmachtenberger, who I'm sure many of you are familiar with, is one of the founders of The Consilience Project. This paper and conversation reflects his current thinking.

As always, you can find our show notes and the transcript for this episode on our website, becomingdenizen.com. There you can also sign up for our newsletter, which brings our latest content to your inbox, alongside information about online and virtual Denizen events and announcements from our partner organizations. I'm doing some IRL salons, and recordings of those are shared with our newsletter subscribers. If you're interested in content beyond the podcast, you can find it there.

This is a deeply thought-provoking conversation that asks critical questions we often bypass. I'm grateful for the work that Consilience is doing and honored to have the opportunity to bring it into a podcast format for you. If you listen to the intro of this podcast, it says we go up to 30,000 feet and really ask the deep questions and interrogate our sacred cows. I feel like with this conversation, we're going up to 50,000 feet and asking the even deeper questions that we tend not to surface in the conversations we've been having as we get into the practicalities and details of what a new system might look like. This conversation raises some very deep questions about progress. At the highest level, the question is: what is progress, and how do we achieve it?

I remember when I was in grad school, we learned all about how to make economies grow. It was a program about poverty in the developing world. I was really underwhelmed that we never asked what progress is. What are we trying to do here? I felt it was really a conflation of the means and the ends. Amartya Sen talks about this in Development as Freedom. It's one of my favorite books. I'm going to welcome Alex to the podcast to interrogate at a very deep level what progress is, how we achieve it, what's wrong with progress as we think about it, and how we might flip the script. Welcome.

[0:03:53] AR: Thank you very much. Thank you for having me.

[0:03:55] JS: The Consilience Project is the organization that you work for, one of our partner organizations. You put out really important papers that help us make sense of the world. Help me understand the progress paper in the context of the work that Consilience does, before we get into the paper itself.

[0:04:12] AR: Sure. Yeah. Thank you. The Consilience Project is a publication of the Civilization Research Institute, a nonprofit think tank that is focused broadly on catastrophic and existential risk. The Consilience Project was conceived of and begun back in COVID times. We wrote about some of the great challenges to making sense in the 21st century. We wrote about propaganda and information warfare. We wrote about how facts can be used to mislead, and how cherry-picking and decontextualization of facts can allow someone to promote a propaganda narrative rather than represent what's actually happening in reality.

Then we put out a regular cadence of articles. It actually got a very big audience very quickly, which resulted in a lot of inbound to the think tank that was producing it. It was a small group of people. That meant that we were actually able to work on a bunch of really important projects that were right at the core of our strategic focus. The cadence of publishing at The Consilience Project has changed, and it's now just a place where we put out big ideas that we've been working on in the grand scheme of many other projects that are going on in the background. One of the most important subjects that we wanted to address on Consilience was the idea of progress.

It's actually hard to think of anything more important among the great concepts that define civilization: our idea of ourselves, of where we're going, past, present, and future. I think it would be fair to say that our idea of progress is upstream of all of the greatest problems we see in the world now. It's not too hard to get a sense that most of the great problems we face now, whether we're talking about climate change, or species extinction, or the risk of nuclear war, are the unintended outcomes of our attempts to solve other problems.

In our attempts to pursue progress, we're making worse, often more complex problems as a result. All of these great anthropogenic risks, no one intended for these things to happen. No one wanted species extinction at scale and the risk of the collapse of ecosystems, but that's what we're getting through our pursuit of progress as we define it now. 

We had an intention to write about this and put an article together that attempted to critique the idea of progress and outline an alternative, a different way forward. In our attempts to write it, we realized just how many relevant concepts there were that needed to be built out. It did turn into a very long paper. Anyone reading it will quickly get a sense of the fact that this is more of a small book. There's a lot in there.

[0:06:50] JS: Yes. I mean, I largely feel like a significant service that I provide is that I do the reading so you don't have to. Or at the very least, you can get it in the podcast episode. For me, and I think for many of the listeners of this podcast, it was a pretty quick read, and I think an important one. I really encourage people to take the time to read it, because it's worthwhile and it has a lot of important things that we'll touch on. It resonates so deeply with me, as I spoke to my disillusionment at Harvard: the conflation of progress and how we achieve it, the focus on growth rather than what it is we're trying to achieve with the growth, and not interrogating the challenges with the assumptions that we're making about how we achieve it.

You speak in the paper to this eloquently, and I love the history and I can't wait to get into that part. The first question I have for you is just, what is problematic about how we define progress today? You speak specifically about the progress narrative. Can you explain to the audience what that means?

[0:08:00] AR: You described yourself at Harvard getting into a state of disillusionment, right? What you were feeling then is, I think, what many people encounter when they really make the effort to see what is happening in the world as a result of the progress we pursue. Whether you're in development or in product development, you tend to define your goals pretty narrowly. We want to achieve a certain goal in this region that lifts this many people out of poverty, or gets this much food into the region. We don't tend to think of all the unintended outcomes that might flow from the decisions or actions that we're taking.

If you're at Harvard studying how to improve growth, then you're not looking too closely at all of the ways that we're undermining the fundamental systems of nature that growth is built upon. You can feel that doesn't make sense in the long run, right? We're just ignoring the things that we don't want to look at too closely. The progress narrative, as we define it here, is this pervasive idea in contemporary society that technology, science, institutions of education and research, and markets as well, of course, are the fundamental drivers of improvements to human life. The developments of modernity, basically, are the fundamental drivers of improvement to human life.

It's a very important set of memes in the cultural landscape right now, because it is the driving force behind all of our accelerating tech development. We're now in a position where we're creating technology that is exponential in its scale and power, right? It's growing faster than it has ever grown before at an exponential rate, and in terms of its ability to affect reality. When you scale technology that way, you have to expect that the unintended outcomes, the side effects, the negative externalities that flow from it will be similarly exponential. That's not a game you can keep playing on a finite planet.

[0:09:48] JS: As I was asking about why now, I also feel like there is an urgency to have this conversation because of what's happening with AI, the effective arms race between companies developing artificial intelligence. I mean, Jesus, last week, now Sam Altman is compensated in equity, and all of the beautiful incentives that OpenAI attempted to put in place at its inception, what it was about, are completely obliterated by the incentives of the system.

So much of what I talk about on this podcast centers on the market and capitalism and the incentives of the market. I really appreciate how your paper speaks to the ways in which this problem that we're seeing with progress, the way that we define it and the way that we go about pursuing it, persists in the public sector. It persists in the non-governmental sector, which is just about doing good, without the misaligned incentives of the market, but which is doing good in a way that's myopic and still spills out externalities, because the aperture of the lens is too small. I think that's just a really critical point. We can talk about the ways in which the challenges of the small aperture are exacerbated by the incentives of the market. I think that's what you're seeing right now with AI. The incentives are really pushing towards speed at the expense of safety, at the expense of analysis that takes into account second and third order effects. The problem is actually at an even higher order than the market incentives, which is how we think about the consequences of the things that we do.

I think you make a really important point, and I want to make sure we don't miss it, around examples of indicators we use to suggest that things are better than we think. Hans Rosling wrote this book called Factfulness, which makes the point that we have all these doomsday conversations and narratives, and yet everything's getting better, and here are a bunch of trends and data that indicate that. You make a really important point in the paper that that is misleading.

You give some examples in detail, like life expectancy and poverty, where you say, "Well, this metric looks good, but let me tell you why this isn't actually the full picture." I don't want to get into all of them, but just quickly touch on life expectancy, and the quality versus quantity that's not assessed in that metric.

[0:12:07] AR: The reason this is a really important point to talk about is because often, the classic examples of progress that we are given are designed to stop us thinking too closely, or looking too closely at the reality. If you're presented with graphs that show post-industrial increase in life expectancy, that is supposed to win the argument that undeniable progress has been achieved. It's another example of cherry picking. It's taking a fact, presenting it as representative of all the information that you might want to know about the state of the world, where actually the world we have, the world we're living in, the world we're embodied in is rather different from the one that that graph points at.

What we tried to do in the paper is show some really clear examples of how decontextualized some of these facts are. While it is true that life expectancy has increased over the last 200 years, at the same time, we've been toxifying the environment with hundreds of millions of new synthetic chemicals with really complex unknown effects in the body, many of which cause cancer, disrupt our endocrine systems, impact our fertility, and we're seeing that play out now.

It's also just assuming that the most interesting, or relevant, point about life expectancy is the number of years. If we live the extra 10 or 20 years that we get in a state of loneliness, or depression, or anxiety, or pain, is that really progress? Is that a comparative betterment of the state of the human being and the way we live? Most of the evidence that we can point to shows that, in fact, our quality of life is in decline across all age groups. People are less happy, people are less trusting of institutions and of each other, people feel lonelier, and there are higher rates of anxiety and depression than ever before, and they are increasing. It's particularly painful to look at the effect in young people. The rates in young people have been increasing dramatically over the last 20 years.

If you offer life expectancy as the only metric worth considering in the context of progress, but you're not considering the life you're living in those years, that's not a particularly holistic or representative view of the progress you're pursuing.

[0:14:13] JS: I love this point in the context of the increasing fixation on longevity. There was recently a Disney Plus series called Limitless, that Darren Aronofsky did with Chris Hemsworth, that was ostensibly about longevity, which Chris was really interested in. They worked with Peter Attia. Across the first five episodes, it's almost like a reality series with Chris, where first he's doing fasting, then he's doing extreme cold exposure, then he's doing extreme feats of strength, and you're fixated on, "Okay, this is what I can do to live longer." Then you get to the sixth episode, and it's about death. Chris didn't know what was going to happen, but they took him onto a set of an elderly community. They put these prosthetics on him, so that his body felt like he was aging. They had this big event, they put his wife in all this makeup, so she came out looking like she was in her 80s, and then they brought him into a death meditation.

For those of you who aren't familiar with the death meditation, it is a meditation on your death. You're literally envisioning yourself lying there with your body slowly shutting down, and interspersed through it are these questions about how you lived your life. Chris comes out of it, and he's just like, "WTF am I doing?" I love it, and I think it's the most brilliant series, because the entire audience is totally caught up in this cultural narrative around living longer, totally disconnected from whether they're living intentionally. The death meditation confronted Chris with, "I'm not living intentionally at all."

It just underscores your point that life extension and life quality are totally different things, and that often we're looking at the wrong metrics. I appreciate that. Now that we've made that point, I want to turn to the history of the scientific method and the critique of it. For me, this is where the paper got really interesting: the ways in which our notions of progress are deeply tied to the scientific revolution and how we think about making sense of the world, and choosing what to do in the name of progress.

Let me start with this point, which for me was one of the biggest ahas and takeaways from the paper: where you spoke to the third-person versus the first-person perspective.

[0:16:36] AR: Sure, thank you. Yeah. When the scientific method was first emerging as part of the scientific revolution, it was never really considered a method for knowing everything. It wasn't presumed to be a worldview that could tell you everything that exists in reality. It was considered that there were domains of study, or thought, or perspectives that were not reachable by the scientific method, and these include religion and the mind. Over time, we have gradually lost sight of that separation of domains.

What science does is look at the world from a third-person perspective. It allows you to take measurements of something, form a hypothesis, and then make predictions. What it doesn't do is tell you about other really important aspects of reality, such as what it's like to be you, some of the most important phenomena associated with being alive, with being real. It doesn't give you anything about your internal state. It doesn't tell you what it feels like to be in love, what it feels like to feel connected to someone.

What it allows you to do is measure certain components, or proxies, of something that is a first-person experience. It's missing access to a really meaningful part of reality. There are loads of different examples to draw out this distinction between first person and third person. One of them is: imagine you are measuring the brainwaves of someone in deep meditation. As a scientist, you can measure the amplitude and frequency of the brainwaves. You can show that after a certain period of time in deep meditation, those brainwaves change. What that doesn't tell you is anything about the experience, the quality of the experience of being in the state of deep meditation.

You may be able to show that meditation affects the brain, but science can't access that internal perspective on reality. More important than that, in a way, is the fact that it can't tell you anything about the second person, the sense of relational meaning between people. As we go into in this paper, there is a commonly missed perspective on reality: that we are in deep dependence and reliance upon each other, and upon the systems that sustain us, at all times. Value emerges in that second-person domain, between people, between beings. Science can't access that either. While the scientific method is really useful for building tools, manipulating reality, and building models to help us understand what we experience in the world, the scientific worldview can't access some of the most important and meaningful aspects of lived experience: what it is to feel meaning, what it is to feel purpose, what it is to feel fulfillment.

[0:19:14] JS: Yeah, and this is one of the quotes that really struck me in the paper: "If the lens through which we view the world optimizes for the third person and misses the first and second person aspects of the world, we are likely to make choices and take actions that fail to serve and protect the things we value most." It's so good. Many of us have read The More Beautiful World Our Hearts Know Is Possible. This just underscores that our methods of sense-making don't consider that by design. I won't say our methods of sense-making broadly, but the scientific method doesn't take those into account.

[0:19:53] AR: The scientific worldview, it can be broadly thought of as a materialist reductionist worldview. In order to understand the universe, you have to start somewhere in studying the universe, right? You can't start with the whole, so you'd start with a part. It is in the breaking down of the whole into the part that we come to label things as individual objects, as separate, totally separable things. When in fact, nature doesn't create those things as totally separate and independent. They are all interconnected in a way that the process of studying it totally deconstructs.

It's useful to think of the fact that nature doesn't make a human heart, right? Nature doesn't make a liver, or a spleen. It makes all of these organs in the context of a whole body. It's only in the process of breaking down the human body in our process of studying it, that we decontextualize it. We take it out of the whole system and we call it a separate object. Whereas, there's no such thing as a separate heart in nature. They only exist in the body.

[0:20:53] JS: Yeah, this brings me to a Donella Meadows quote. It might not be the last one I throw at you, but I'm sure you're familiar with her work. I love her so much. She says, "There is no single, legitimate boundary to draw around a system. We have to invent boundaries for clarity and sanity, but they can produce problems when we forget that we've artificially created them. There are no separate systems. The world is a continuum. Where to draw a boundary around a system depends on the purpose of the discussion." She's spot on, in that we're just not thinking about the externalities outside the boundaries we draw.

I just so appreciate this point, so I want to say it again. There are two ways that the scientific method leads us astray. One is the reductionism that comes from myopia: I'm just looking at this little piece of something that's actually complex, and you get all these externalities because the lens is too small. The other is that the necessity of the third-person perspective of an external observer means you're just not considering all of these other things. There are two huge components of reality, everything outside of the myopic part of the system that you focus on, and the actual felt experience of the humans you're ostensibly designing something to support, that are not directly considered in the default way we think about doing progress.

[0:22:19] AR: When you try and take a step back and think deeply about where our idea of progress has come from, you really do have to go quite a long way back into deep history, long before measures of GDP and things like that, long before the industrial revolution. There are many different places you can point to as the origin of the idea of progress. Some of the most meaningful points people tend to bring forward as the origin of the concept of progress, as we know it now, involve things like nutritional surplus: early forms of agriculture that allowed groups of humans to store food in a way that allowed for expansion of population. With expansion of population comes specialization of certain parts of your population, and the development of increasing military power.

When you have more food and the ability to store it, that food can be targeted by other tribes as something to take, in an adversarial competition dynamic that emerges from that process. Another thing that's really important is the advent of writing. The earliest writing cultures had to have a way to track things like early forms of debt and accounting. With that, we had a means of transmitting ideas. Combine nutritional surplus with the expansion of military activity between early groups of humans, and a way of telling the stories that justified the outcomes of battle, and you end up with a forward-moving idea set, the start of a meme set that underlies the progress narrative we have now. Eventually, that led into the Enlightenment and the scientific revolution as we have it.

[0:23:57] JS: Got it. That early narrative, which actually started with the shift from hunting and gathering to settling and having surplus, was just underscored and accelerated by the evolution of thinking with the Enlightenment and the scientific revolution.

[0:24:11] AR: Yes. It's important to note that there's obviously a parallel development in religious thinking that occurs alongside the idea of progress as we just outlined it: from a hunter-gatherer way of life to a settled agrarian way of life, to an industrial way of life. Alongside that, obviously, there were philosophical developments that changed the way we think about the world and our place in it. Over time, we've had an increasing secularization. We have developed, through the scientific method, a way of viewing the world that leaves less and less room for God as we previously had those ideas.

One of the consequences is that over time, you get an increasing weight towards the scientific worldview as a way to explain everything, because the scientific worldview is the thing that is giving you competitive advantage in the dynamics between various nations, or between competing groups. As your success at technological endeavor, at advancement in the narrow sense, increases, you generally come to a place in which you are overcoming your rivals, simply through greater population, greater resources, greater ability to specialize in the military domain, and your ability to win at warfare, and at the level of markets as well.

[0:25:26] JS: Yeah. The worldviews of the groups who outcompeted others became instantiated.

[0:25:30] AR: Exactly. The idea that history is written by the winners is a really useful frame for thinking about history, writ large. Of course, there were many other cultures that existed, but that have been wiped out, completely eliminated, from whom we have no learning anymore. We've lost all of the culture and beauty and knowledge that those cultures carried, because they were less good at fighting and winning wars, and less good at generating material surplus and advancing in technological terms. Of course, the global civilization we live in now is the product of that process over the long arc of history.

[0:26:07] JS: What is so critical for our way out of this is going back to the indigenous wisdom that was lost and reintegrating it: its orientation, the relationship with nature that's inherent and centered in it. I think understanding the ways in which that intellectual and cultural legacy was discarded and lost, and why, is a super valuable point that you make in the paper. We'll start to turn to where you go from here soon, but I do want to talk about techno-optimism. You say that it's the contemporary version of the progress narrative. I'm sure you've read the Andreessen Horowitz Techno-Optimist Manifesto, which I truly thought might be a joke, because it was so unbelievable and over the top. Let's just talk about techno-optimism briefly.

[0:26:54] AR: I saw your tweets in response.

[0:26:55] JS: Oh, you did.

[0:26:57] JS: Yeah. That blog post has made the rounds.

[0:27:00] AR: They were very pointed and definitely well taken. Yes. Techno-optimism is what you could call the contemporary instantiation of the progress narrative, I think in light of the recent advancements in a bunch of technologies with the potential to really impact the future. In AI and synthetic biology and other exponential technologies that have the potential to change the world as we know it, we've had a polarizing of the debate around the value and utility of these tools and their risks.

The techno-optimist perspective is very closely aligned with the broader progress narrative as we've known it. It's the perspective that advanced technology, in combination with human ingenuity, is the thing that will deliver us into a future of abundance, that it can solve all of our problems. It's effectively the worldview that says: our problems with climate change, we can solve with planetary-scale geoengineering; our problems with disease, we can solve with nanotechnology; our problems with collective coordination, we can solve with AI and superintelligence.

[0:28:03] JS: Every time someone talks about geoengineering, I just have this visceral reaction about how much we don't understand the biosphere and nature, and the unintended consequences of that manipulation. It just makes me shiver. I think it's a perfect example of why this line of thinking goes awry.

[0:28:19] AR: Planetary-scale intervention in the complex system that delivers our weather and our climate is the kind of hubris that can cause a lot of problems. The techno-optimist perspective is effectively taking a very narrow view on reality, focused on narrow technological advancement, saying that if we can increase the power and efficiency of our tools, and we can expand the accumulation of human knowledge about the universe, there is no problem that cannot be solved by that approach.

I think there are a couple of key problems with that worldview. Obviously, we've been talking about it in general, but it doesn't account for the scale of externalities. If you're going to increase the power of the technologies you deploy without doing sufficiently careful assessment of the risks, and without thinking through the unintended consequences, you're going to have similarly exponential externalities that will eventually break the biosphere as we know it. We can't keep doing that.

[0:29:12] JS: Why does this persist? Part of it persists because of the incentive structure. This is obviously what I speak to a lot, but let's talk about perverse incentives and why the perverse incentives that we see in the system are not adequately addressed by the government, which is supposed to address perverse incentives.

[0:29:32] AR: Perverse incentives arise in the context of the risk that flows from new and advanced technologies. We talked a little bit about AI development and the arms race that we're seeing play out on a global stage now. We could also talk about things like the network effects that deliver monopolistic outcomes in tech markets. There are many incentives to rush risky new technologies out, so that you don't lose the race, or lose all of the funds you've invested in developing a new product. We see perverse incentives all over human systems of organization. When we organize at scale, we tend to embed perverse incentives in our institutions and across society. This is one of the fundamental drivers of the complex, interconnected global risk landscape that we have now.

When it comes to tech development, it is the defining feature that forces us to rush forward without taking sufficient care. It's very easy to use the call for care as a pejorative. Like, say, “Oh, you're just being a doomer. You are advocating for a slowdown, when in fact, we need to speed things up.” I think it's important to consider the fact that this is effectively a perspective of holism. We're trying to take a step back and look at the whole that is being affected by the decisions we're making in deploying certain advanced technologies. Failing to do that is a willful blindness. It's a form of social opportunism that refuses to look at the wider effects of the decisions that are being made.

[0:30:57] JS: I really appreciated the points that you're making around intrinsic versus extrinsic motivation. We have an economy that is driven by extrinsic motivation from our need for money. I do Denizen because it's fully intrinsically motivated. I happen to need to get paid for it to keep doing it. I'm intrinsically motivated, but so much of what we do is extrinsically motivated by capital. In fact, the most soul-sucking jobs pay you ridiculous amounts of money in many cases, because people wouldn't do them otherwise; that's what motivates people to do them. I just really appreciated the point that you make around intrinsic versus extrinsic motivation, that economic incentives are perverse, that the government is ostensibly supposed to correct for those perverse incentives, but because of the correlation between money and power, the market has captured government through lobbying and through campaign finance.

I hadn't thought about it as clearly, but when you said it in the paper, the fact that people come from industry into government means the culture of industry infects government, but also, people want to come out of government into industry to get paid, because they don't get paid well in government. There's a cultural cyclicality that is actually really critical that I hadn't thought about, in terms of the ways that the market corrupts the government's decision making. I just think that's such a critical point around why our systems of governance, which are supposed to address this gap between our social preferences and market outcomes, break down.

[0:32:34] AR: We try to detail, as a light overview, all of the many ways in which the market can capture the state. The market capturing the state is the problem that stops the state from being able to police perverse activities in the market. You were talking about the concept of revolving doors there, the idea that as you climb the ladder working in government, at the end of it, you'd quite like a nice job, perhaps, in one of the industries that you were helping to regulate while you were working in government. But if you spend your whole time in government enacting really stringent regulations and enforcement on the issues –

[0:33:06] JS: Nobody likes you.

[0:33:07] AR: You're not going to get the cushy job at the end of it. It works the other way as well. You could talk about campaign finance influencing politics. You could talk about public-private partnerships. There are so many models by which the market has innovated ways to capture the state, so that the state either doesn't enforce, or fails to curb some of the mechanisms by which the market is damaging the biosphere, damaging human health, damaging human minds. There are many routes that the government really cannot keep on top of in a post-industrial society.

[0:33:42] JS: Yeah, the fact that technology is moving too quickly for government to keep up, to be able to regulate. They don't have the – yeah. I mean, I just appreciate that you made those points, because I think they're critical and I make them all the time. It also underscores what we spoke to already, but I want to say it again because it's important: the way we do progress, the way that the scientific method crowds out first- and second-person perspectives, the way that it is myopic in its measurements and so leads to all these unintended consequences, is distinct from, but hyper-charged by, the market and the market incentives, which are more focused on growth and profit. Then you get this really interesting and tragic reinforcing feedback loop with money and power and the corruption of governance, and then you get more money, more power, more influence, widening inequality over the last four or five decades. I think that's a really important point.

I want to make one more point, because I thought this was super interesting. I hadn't thought about it. We ask this question, why does this persist when the winners are so few and everyone else is facing stagnation, addiction, mental health crises? Why is there not a revolution taking place? You spoke about Stockholm syndrome. I thought that was a really important point, so I want to make sure we make it.

[0:35:03] AR: I think one of the things you touched upon there was the fact that the market benefits from the narrow definition we have of progress now. When we choose to optimize against very narrow metrics and pursue profits in that way, it allows us to not spend too much time caring, or thinking about the other effects that might be caused in reality. This process, played out over many years and many generations, leads to a world of increasing inequality. One of the things that we tried to bring out as much as possible is: progress for whom? Progress at what cost? Those are the questions that are not fundamentally answered by the progress narrative as we have it now.

It's easy to see that the benefits of progress as we think of them, the wealth, the comfort, the improvements in lifestyle, even the improvements in access to healthcare and education, are not distributed equally over society. There are certain elite groups that have access to those things in abundance, and then there are many billions of people that don't have access to those things at all in any meaningful way. The idea of Stockholm syndrome is a useful psychological model for thinking about why it is that people are okay with the world system, even when it's pretty obvious that we're in a state of decline.

There are loads of internet memes around how kids now can't afford houses when their parents could on far lower salaries. There's an obvious felt sense, it would seem, that people understand there is a phenomenon of decline that we are experiencing, but we're not seeing the revolution you pointed to. Part of it is because we need to resolve the cognitive dissonance. We have to identify with the system that delivers us the potential for improvement, the potential for betterment, so as to not have to exist in a state of insecurity, or a state of lack of safety. Stockholm syndrome is just a useful narrative device to point to the fact that people can identify with the thing that is holding them captive in order to feel safe, to feel secure.

[0:37:01] JS: You also make this important point around how, just to cope with the grind of Western society, we turn to all these dopamine-addictive mechanisms, or society offers them to us. Social media, sugar, porn, all of these things that keep us in addictive cycles and, in some ways, also potentially keep us from interrogating the systems that we inhabit. Although, certainly, it can be easy for it to just feel too daunting and overwhelming to even think about.

[0:37:32] AR: The one thing I would say about that is, when you put it into context and take a step back, you can see quite clearly that we have generated a world system that systematically disconnects us from each other and from nature, and that it sells us a solution to our dissatisfaction in the form of addictive hits of pleasure. Whether it's the sugar, or the fast food delivered to your doorstep, all of these things are driving outcomes that no one wants. Whether it's scrolling on social media, which has collapsed our attention spans and decreased the level of complexity in public debate. You can point to so many effects of the systems that now demand our attention and determine how we spend our time. We have effectively traded the real things that matter, the intimacy, the connection, the meaning, for tokens of status instead. That's what the addictive hits of pleasure allow us to hide from.

[0:38:26] JS: Yeah. Yeah. All right, so now for the good news. You say, real progress would require internalizing externalities, binding social traps, rethinking our approach to problem solving and advancement in technology more generally. I appreciated the point you make around our notion of progress being immature. You draw this analogy to human development and the maturity in humans as our brains develop. You say, as with maturity in humans, maturity in relationship to progress necessarily involves caring, noticing, and then making changes to address issues identified. I got pretty excited that you used the word caring, because for the Denizen community's definition of the world that we want to move towards, we very carefully chose three adjectives, and they are just, regenerative, and caring, which is less often seen. Let's speak a little bit more to this analogy that you're making with maturity and then get into what it really looks like.

[0:39:31] AR: It's interesting, because there are two important threads here. One is that the model of maturity is a useful frame for thinking about what's wrong with progress and what we need to be doing to make it right. It's going from being immature to being mature. There are a bunch of other frames that you could use that would say the same thing. You could call the progress we pursue now fake progress, right? Calling itself progress, but actually, it's just ignoring all the bad stuff, ignoring the trajectory of our global civilization, and just hoping, fingers crossed.

[0:40:01] JS: That's definitely fake progress, or misguided.

[0:40:03] AR: Immaturity is a useful model for thinking about it. But the caring comes through being connected to our actions. There's this systemic problem we have now where it's very hard to care about the consequences of our actions, because we're so disconnected from them. Take the supply chain that's producing the laptop that we're speaking on now; we are disconnected from the costs of those things, and so we lack the world model to care about them. I don't have to see the child workers who had to mine the cobalt for this battery. I don't have to see the rainforest destruction in the Congo. I don't have to see the people who have died from the toxic synthetic chemicals that run off into the biosphere as a result of the manufacturing process. Because I can't sense those costs, I cannot care about them. If I cannot care about them, because I can't sense them, then I'm not going to be called into any action in response to them.

The immaturity frame is useful, because it points to the fact that progress as we define it now is limited and it needs to go through a whole series of stages of development to become mature. It needs to take account of its costs. It needs to realize and care about the fact that the way it is at the moment is damaging and undermining things that we need to survive. The point we make is imagine when you're a toddler and your parents are taking care of you, you can be horrible. You can shout and scream and have tantrums and refuse to eat your dinner and do all of the things that toddlers do, because you don't have a world model yet. You are too immature. You don't have a world model that understands you're harming the people who love you and the things that you need to survive. You rely on the love and the generosity of your parents to support you when you're being an asshole.

The progress narrative as we know it now is that immature version, right? It needs to mature. It needs to move into a place where it understands the consequences of its actions, the scale of its externalities, and the fact that it's harming and undermining the systems of nature that we all depend on, whether we like it or not. We might think that we're going to be uploaded into the cloud as digital gods, but you cannot do that without a healthy, resilient earth that supplies all of the materials for the systems that would make that happen, even if it is technically possible.

[0:42:10] JS: I just got this book, The Evolving Self, by Robert Kegan. Have you heard of it? It's super interesting, because it has this mapping of human development. Jessica Fern actually talks about this in Polywise. I've actually talked about consensual non-monogamy and why it's relevant for systemic change and how it helps us move through our own evolutionary process. It's pretty fascinating just in the context of the point that you're making, because it talks about these earlier stages, where we're toddlers and we're teenagers and we're very self-oriented. Then we move into this state of being socialized. The vast majority of adults sit in this state of the socialized self.

The socialized self is indoctrinated with the current system, the current set of values. They don't question it. Self-worth comes from the bank account, from the resume, yada, yada, yada. Then some people move into what Kegan calls self-authoring. This is more like, I am returning to something that is intrinsic about me. It's very interesting, because you make this point around connection and disconnection. We're in this moment of, to quote Charles Eisenstein, the story of separation, of disconnection. Moving from this socialized self to the self-authoring state, which a minority of humans are currently in, is evolving out of the immature story of progress that you're talking about toward a more mature story of progress. It's a consciousness shift that needs to happen globally to move us into that more sophisticated way of thinking about progress. I just wanted to bring that in, because I think it's really interesting.

You had some unexpected things to say about optimism versus pessimism. Toxic optimism versus toxic pessimism. You are advocating for more pessimism. Tell us what you mean by that.

[0:44:02] AR: I would say that there are some simplistic and relatively toxic versions of both optimism and pessimism that are really prevalent at the cultural level in the world right now. What we need is an awareness of the dialectic between the two, and of the various versions of these dispositions, these traits, these ways in which we relate to the world, with a sense of the fact that they impact how we think about reality and the future we get. If we are pessimistic in an unhealthy way, then we can be frozen in inaction. We can feel disconnected from an ability to change the world. We can feel nihilistic. We can feel separate from agency.

There are obviously unhealthy forms of pessimism that are really prevalent. We mentioned the word doomer earlier. Doomers are targeted by techno-optimists as being down about the future and therefore unwilling to be involved in building it. That's an unhealthy pessimism. The healthy pessimism we need is an awareness that we do have an effect on reality and we need to take account of it. It's about caring enough. It's an expression of care for reality and a desire to make sure that we're modeling the world properly, so that we don't mess it up. On the unhealthy side of optimism, we're talking about the willful blindness we mentioned, the willingness to just turn away from problems, pretend they're not there, and focus on the upside all the time. That is a toxic optimism that is actually going to harm things that we need to survive.

A healthy optimism is, in fact, knowing that pessimism is useful sometimes. An example that I think is particularly enlightening here is how to think about strategy. It would be really silly to just be optimistic about your strategy, because then you would blind yourself to all of the ways in which you might fail, all of the pitfalls you might fall into. You want to be pessimistic about a strategy, because then you will be looking out for the ways in which it might fail, and you may be able to mitigate them in advance.

There are healthy and unhealthy forms of both optimism and pessimism. What we need at a civilizational level is an awareness of the dialectical relationship between the two, so that we manage an outcome that drives us towards actual holistic civilizational betterment and not just this narrow technological advancement.

[0:46:14] JS: Yeah, appreciate that. Okay, now you get into some like, a little bit more brass tacks around, okay, what does this look like? Talk about prudent problem solving. It made me think of this Einstein quote. You probably know which one I'm about to say. “If I had an hour to solve a problem, I'd spend 55 minutes thinking about the problem and five minutes thinking about solutions.”

[0:46:36] AR: That's perfect. It's the tagline of this whole idea, right? Prudent problem solving is something we've developed at CRI that we will shortly be publishing in a formal way on our website. It's designed as a really simple, principled set of steps that allow you to take a step back before launching into a technological solution to a problem. We've talked a lot about incentives. There are many incentives to look for technological fixes to problems, right? Often, that leads us to a position where we end up creating worse problems down the line.

What prudent problem solving does in a very simple way is ask you to look upstream. What's the origin of the problem? Make sure you're defining the problem well, first off, then what is the origin of the problem? It asks you whether it is actually a problem that needs to be solved by technology, or whether it is something we need to relate to differently. One of the examples Daniel has mentioned many times before is perishable food. Is the perishability of food something that should simply be solved by technology? Or does it actually teach us important lessons about being connected to nature and about death and decay? There are loads of other examples. I won't go into them.

The prudent problem-solving process also simply asks you to think about situations in which the problem you're trying to solve is solved naturally, or doesn't occur, and to try to take lessons from that and apply them to the problem at hand. It asks you whether there are existing technologies that can be repurposed that have a known safety profile. These are all ways in which we can avoid making something new that might harm other things that we haven't thought about properly.
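The questions described here can be imagined as a guided checklist. This is a hypothetical sketch only: the formal process hadn't been published at the time of recording, so the step names, their ordering, and the report shape below are assumptions drawn from the conversation, not CRI's actual method.

```python
# Hypothetical sketch of a "prudent problem solving" checklist.
# Step keys, wording, and ordering are assumptions inferred from the
# episode; the formal published version may differ.

PRUDENT_STEPS = [
    ("define", "What exactly is the problem, stated precisely?"),
    ("origin", "What is the upstream origin of the problem?"),
    ("relate", "Does this need a technological fix, or a different relationship to it?"),
    ("nature", "Where is this problem solved naturally, or where does it not occur?"),
    ("repurpose", "Can an existing technology with a known safety profile be repurposed?"),
]

def walk_prudent_steps(answers):
    """Pair each checklist question with the answer supplied for it.

    `answers` maps a step key to free-text reasoning. Missing steps are
    marked UNANSWERED so no question can be silently skipped.
    """
    report = {}
    for key, question in PRUDENT_STEPS:
        report[key] = {
            "question": question,
            "answer": answers.get(key, "UNANSWERED"),
        }
    return report
```

Filling in only the first step, for example, still yields a report that flags the four unexamined questions, which is the point of the exercise: the checklist makes skipped thinking visible rather than letting a team jump straight to a technological fix.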

[0:48:04] JS: Yeah. You outlined this five-step process of prudent problem solving. I appreciate the first one, which is to identify the problem you're trying to solve and get to the root. This is the spending more time on the problem to make sure it's the right problem. There's so much effort in the impact and social enterprise space that's still marginal, extractive capitalism. So much good intention that just doesn't actually yield what we ultimately want for life on this planet, not just human life, but all life on this planet, and it's because it doesn't take the time to ask if you're solving the right problem.

[0:48:43] AR: It's a hard thing to really look at too closely, because it does hurt, right? You think about the scale of human energy and ingenuity that goes into solving some of our great problems, and then you come to a place where you realize that actually, we're making very little progress. Look at climate, for example. I mean, you've got the stated goal of the largest and most wealthy nations on earth, trillions every year spent on it, and we're still extracting more fossil fuels than we ever have done before year-on-year. We're clearly not solving the problem. Having that moment where you check in with reality, and you think, “Just hang on. Are we actually doing something here to solve the problem at hand?”

[0:49:19] JS: Yeah. I mean, you say it really well. Focusing instead on upstream causes would allow us to consider whether our goals might be best served by addressing the origin of the problem, rather than the problem we see before us. There's the problem we have before us, which is what's going to happen if we don't pull carbon out of the atmosphere, stat, right? That's a climate change problem that requires immediate attention. But the origin of climate change, which is why I don't talk about climate change on this podcast, is the incentive structure and the broken governance structures. If you want to talk about climate change, actually, let's talk about the systems, right? I do think there are some immediate ones, too. We won't get into all of the steps, but they're definitely worth checking out. I will put them in detail in the show notes so that people can get them there. You also talk about axiological design. Tell us what that means.

[0:50:03] AR: Yeah. There are a whole bunch of design philosophies that underpin some of the tools we're developing to help us solve problems in a way that either internalizes externalities, or just gets us to think in a totally different way about how to solve problems, without doing what we see now in our pursuit of progress. Axiological design is design that is formed in the service of values. Axiology is the study of value, basically. It's interesting that when you take a step back and look at the things that do seek to generate positive externalities rather than negative externalities in the world, fundamentally, what we're talking about is variations on ecological design.

It's because ecology is the largest-scale system that has done all these things. It has developed closed-loop solutions to problems and provided for itself in a way that takes account of its fundamental balance sheet. We use terms such as synergistic. Synergistic design is an approach to finding multiple solutions to a single problem, or having a single solution deliver multiple positive benefits. Then there is ontological design, the recognition of the fact that what we design designs us in return. There is a reciprocity to the design of everything in our reality, and the things that we make shape us. Then, as we said, ecological design: design inspired by the natural world and an acknowledgement of the fact that most of the problems and the processes for internalizing externalities have been solved in nature before, and we can be inspired by that in our design approaches.

The tools that we're making are trying to take these principles and apply them to globalized industrial society and try and work out how we can start generating the outcomes that lead to less harm over time.

[0:51:42] JS: You talk about yellow teaming. I want to make sure that we get that concept in this conversation, because I think it's important.

[0:51:49] AR: Yellow teaming is something that we are also working on developing, a formal process for thinking through the total externality set of something you're developing, whether it's a new technology, or a product, or a project at scale. It was inspired by the practices of red teaming and blue teaming, so it's worth me just describing what those are. Red teaming was a military strategy game. It was an approach to taking the adversarial perspective on something that you're trying to protect: how can you break or corrupt the thing you're trying to defend? You get a group of your guys, you make them the red team, and it's their job to attack, in all the ways they possibly can, the thing that you're trying to defend.

That process has become really popular in cybersecurity. Cybersecurity consultancy firms come in and they red team your digital waterfront: your servers, everything they can get access to. They try to hack into your conference calls. Then afterwards, they produce a report of all the ways they can break your defenses. So, that's red teaming.

Blue teaming is trying to protect something. Instead of taking the adversarial perspective, it's how can we safeguard this thing we care about? These two team approaches have found a pretty solid home in product development. Yellow teaming is the idea that, rather than staying within the narrow perspective of the product or the plan, you think about all of the ways in which your idea, your project, or your technology will touch reality over the full course of its lifetime. It's basically a guided process for walking through a set of questions: how it's going to affect the environment and ecology, plant life, animal life, the oceans; how it's going to affect human psychology and human health; how it's going to affect existing problem sets and the incentive landscape; how it could be corrupted, broken, or weaponized. The idea is that it will bring you to a principled place of understanding, where in some cases you realize that maybe you shouldn't be doing the thing at all. In other cases, it will allow you to design mitigations that actually deliver a better product for both you, as the owner of that idea or project, and the world, future generations, your children, and all the things that actually matter to us, right?

Yellow teaming is basically just a formalized process for externality mapping. We're building prototypes for this approach and trying to find ways to test it out in various market sectors. Yeah, it's exciting.
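The externality mapping described above can be sketched as a small data structure. This is purely illustrative: yellow teaming was still at the prototype stage when this episode was recorded, so the domain list and report shape below are assumptions reconstructed from the question areas Alex names, not the formal process itself.

```python
# Hypothetical sketch of a yellow-team externality map. The domain
# names are paraphrased from the episode; the real prototype's
# categories and output format are unknown and may differ.

YELLOW_TEAM_DOMAINS = [
    "environment and ecology",
    "plant life, animal life, the oceans",
    "human psychology and health",
    "existing problem sets",
    "incentive landscape",
    "corruption, breakage, weaponization",
]

def yellow_team_report(product, findings):
    """Collect one externality finding per domain for `product`.

    `findings` maps a domain to notes on how the product touches it
    over its lifetime. Domains with no finding are listed separately,
    so the map stays total and nothing is silently ignored.
    """
    examined = {d: findings[d] for d in YELLOW_TEAM_DOMAINS if d in findings}
    unexamined = [d for d in YELLOW_TEAM_DOMAINS if d not in findings]
    return {"product": product, "examined": examined, "unexamined": unexamined}
```

The design choice worth noting is the `unexamined` list: like a red-team report that enumerates every defense it broke, the point is an exhaustive account, so any domain the team hasn't yet thought through is surfaced rather than dropped.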

[0:54:18] JS: It's exciting and I appreciate it. I'm going to bring in one of my favorite Donella Meadows quotes again, because I want to hear how you're thinking about this. I super appreciate what you're saying, which is that we need to account for all these externalities, but we're still talking about a complex system, where we can't. This is a quote from Donella Meadows. I'm sure you're familiar with her work. She's one of my heroes. “People who are raised in the industrial world and who get enthused about systems thinking are likely to make a terrible mistake. They are likely to assume that here, in systems analysis, in interconnection and complication, in the power of the computer, here at last is the key to prediction and control. This mistake is likely because the mindset of the industrial world assumes that there is a key to prediction and control. But self-organizing nonlinear feedback systems are inherently unpredictable. They are not controllable.”

“They are understandable only in the most general way. The goal of foreseeing the future exactly and preparing for it perfectly is unrealizable. We can never fully understand our world, not in the way our reductionist science has led us to expect. For those who stake their identity on the role of the omniscient conqueror, the uncertainty exposed by systems thinking is hard to take. If you can't understand, predict and control, what is there to do? Systems thinking leads us to another conclusion, however, waiting, shining, obvious as soon as we stop being blinded by the illusion of control. It says that there is plenty to do, but it is a different doing. The future can't be predicted, but it can be envisioned and brought lovingly into being. Systems can't be controlled, but they can be designed and redesigned. We can't surge forward with certainty into a world of no surprises, but we can expect surprises and learn from them and even profit from them. We can't impose our will on a system, but we can listen to what the system tells us and discover how its properties and our values can work together to bring forth something much better than could ever be produced by our will alone. We can't control systems or figure them out, but we can dance with them.” It's good, isn't it? Have you heard it before?

[0:56:22] AR: I have heard that before, yeah. You're reminding me how brilliant Donella Meadows is.

[0:56:26] JS: She's the best. I'm just curious, yes, we need to think in a more complex way and consider externalities, and I want to get actually before we close, which we'll do soon into how we might actually instantiate that beyond yellow teaming, because I have some ideas myself. We're still not going to get to some theoretical ideal where we can really fully predict all the effects.

[0:56:51] AR: Nothing that's been said here. Yeah.

[0:56:52] JS: I'm not saying you can. I'm just curious how you're resolving that reality in your own thinking within the team.

[0:56:59] AR: The first thing to say is that we are doing a terrible job of it now. We are doing a terrible job of thinking about the unintended outcomes of new technologies. Of course, we could do a lot better. If the incentives in the system were structured in a way that we didn't rush things out, and we did have time to do adequate thought work to think through the consequences of our decisions, then we'd see a very different world. Just because it is not currently incentivized and it's not currently done, it doesn't mean that we couldn't go a great deal further and mitigate a lot of the harm that we currently see. For sure, we're never going to get to a place where we understand everything. We could generate a much better picture, probably with not that much input in terms of thought energy.

All it takes is, I think, a little shift, and we will be thinking about the way that we develop new technologies and insert them into ecosystems of technologies in a very different way. When you consider the scale of investment and the passion with which products are conceived of, launched into the market, and rushed out as part of a race, it's hard to imagine the scale of the benefit that even slight pauses, or a slight redirection of resources into careful thinking, could actually produce, but I think it could be really, really significant.

[0:58:17] JS: This question of what progress is, it's a philosophical one in many ways, but certainly informed by culture and values, and those things are very mutable. I think in a healthy society, there is a heterogeneity of opinions and preferences around what progress is. If we're talking about externalities where we don't have an omni-win situation, we have to talk about trade-offs. I might feel differently about them than you would. I'm just curious how you're thinking about the what-is-progress component of it, and how you think about an evolution of progress that takes into account systems of sense-making and decision-making that account for the philosophical questions and the heterogeneity of points of view about it.

[0:59:00] AR: It is of course a key question. I think that you can speak quite clearly to some aspects of it, and then it's a little murkier in other areas. One thing that you could say is that there are some obvious aspects of society now that the vast majority of people would say are just clearly not good. This is the judgment over what is good and what is not, what is the good life, what is the true, the good, and the beautiful. A society that is high in rates of addiction is probably a good indication that you are not driving towards a progress that is actually representative of holistic betterment.

It's funny, just before we spoke, I was thinking about the definition of progress. I realized how often, even in the definition, if you were to look up in a dictionary, there is this conflation between advancement and betterment. You can say that there is narrow technological advancement in every iteration of the iPhone, or in the new laptop you've just bought, or in the new microphone you've just attached to your computer. You can say that tools are improving in terms of their performance, their efficiency, their capacity to do things in the world. But often, that isn't representative of actual betterment.

The smartphones that are getting better and better technologically, that are advancing technologically, are harming our kids. It's not hard to see that. We see rates of teen suicide increase alongside the number of hours spent on the smartphone. Yes, the smartphone is representative of narrow tech advancement. It is not representative of holistic betterment. It's just fascinating that even in the strict definition of the word progress, we are conflating two very different things, two things that are dangerous to conflate. To get to what you're saying, there's this challenge of articulating the value set that should underlie progress for everyone. Of course, it's not easy. There is a difficulty there. But there are some things that we can say clearly are not progress. The destruction of the ecosystems that we rely on, the poisoning of the oceans, the toxification of the environment, such that we all have an incredibly unnatural chronic disease burden. These do not fall within the value set that anyone would say is representative of actual betterment for civilization. These are bad.

[1:01:13] JS: 100%. I think Eisenstein lays it out pretty well in The More Beautiful World Our Hearts Know Is Possible. I think it's this connectedness. Again, I spoke earlier about living authentically. When we're connected in a spiritual way to some one consciousness, if you believe in those things, I do think that the attributes of what progress is naturally spill out from that. I mean, I really appreciate that you outlined a design process that helps us address the right problems, with solutions that have more awareness of externalities where possible, and that helps us to move slower.

I mean, there are some other things that came to my mind as I'm thinking about, okay, well, how do we instantiate this more mature way of doing progress? One is the design process that you mentioned. Let's just acknowledge with humility the limitations in fully doing it, right? The other thing that comes to mind for me is stakeholder governance. I think that's really valuable. You have various stakeholders represented, so that the externalities that will affect those groups will be represented in the decision making of the company, of the non-profit, of the government, etc.

I'm actually soon going to do a series around moving from anthropocentric to eco-centric legal systems and governance structures. When you talk about rights of nature, not just rights of humans, right? Or when you talk about someone representing nature on a corporate board, I think that's a way to represent the externalities in the decision making and internalize them, right? Obviously, that means resolving the tension in incentives between profit and purpose. But when you have non-economically incentivized stakeholders driving decision-making in companies, you start to do that, and instantiate that into governance structures. This is just where I am seeing the rubber hitting the road.

I think the design process is really important, too. I really appreciate that you outlined that. I think the cultural changes that are needed are also really interesting. I've seen some examples where, even with governance structures that ostensibly are the right ones, you still have this cultural indoctrination. There's the culture of what better is, or what advancement is, or what the market is, so rewriting those narratives, I think, is really important. Then you also spoke to consciousness changes. When we are connected, we are caring by nature. I think the consciousness shifts return us to what is our more natural state, our more fulfilled state, so we're not turning to social media and sugar and all those other things to placate our sense of unfulfillment.

I actually really appreciated that we spoke to unresolved childhood trauma. That is often driving a lot of behaviors, because we don't have that connectedness within ourselves. These are just some things that came up for me that weren't outlined in the paper but that relate to instantiating this decision making in our organizations, or in ourselves. Stakeholder governance, design processes, cultural changes, and consciousness changes are the components that came up for me.

[1:04:31] AR: I think you're entirely right. I think that if we are honestly reflecting on what it would take to build the idea of progress that could deliver us into a more resilient, sustainable future, then we're talking about changes at all levels of civilization. We're talking about changes within the individual, at the level of personal development. We're talking about changes in our institutions and our systems of social organization: upgrading institutional thinking to become aware of some of the perverse incentives, to become aware of the scale of the externalities that they're failing to map. We're talking about changes at the level of culture, the level of how we tell our stories about ourselves and the future, and what matters and what a value set might be.

Of course, there are changes at the level of our tools and technology. One of the papers I'd also love to highlight that we wrote at The Consilience Project is about tech not being values neutral, that tech encodes values into its users. I'd encourage everyone to go and read that if they can, but I totally agree with you. We're talking about a fundamental change in many of our ways of relating to ourselves and the world around us.

[1:05:42] JS: It also underscores for me why the pillars of the Denizen inquiry are what they are, why culture is one of them, why consciousness is one of them, why technology is one. It speaks to how they all interrelate in these systemic outcomes that we care about. I want to close with wisdom, because I think it's valuable. You talk about wisdom versus knowledge.

[1:06:05] AR: It's interesting, because going from the sense of trying to map the scale of the change required across the different structures of civilization, right from the individual up to the highest-level macro structures, wisdom can be mapped across the same set of structures, right? Knowledge is the things that we know about our world, and wisdom is having a sense of the ways in which that knowledge influences decisions and influences sound judgment, right? At the level of the individual, we think about wisdom as being the process by which we go about choosing which goals are good goals. It's about goal selection, rather than goal achievement.

In the way we pursue progress now, we don't do that. We don't have a sense of which goals are good goals. We simply try to achieve and achieve in an increasingly optimized fashion. Wisdom at the level of new tech development for the benefit of civilization needs to incorporate some of the things you typically see in wisdom processes. If you think about wisdom cultures, one of the things that they tended to practice was restraint, right? The knowledge that not all goals are good goals, and that some things that you might pursue in the near term might be meaningfully detrimental to things that you value in the long term.

Wisdom cultures also, for example, tried to instantiate the understanding that concentration of power is not a good outcome, because when you have a huge inequality of power, those in power tend to want to hold on to their power at the expense of the powerless.

[1:07:25] JS: It's a really important point. Yeah.

[1:07:27] AR: It is. I think the way we think about what wisdom is, as opposed to what simply knowledge is, is an important foundational set of principles for how we define progress. That we want to be embodying wisdom in our pursuit of progress in the sense that when we are wise, we are aware that there are wider consequences of our action. When we are wise, we have a sense that the choices we make might be serving narrow goals, as opposed to holistic goals, and they might be harming things we value and need. These are all the foundational principles that underlie wisdom and maturity in humans, and the same thing needs to be developed in terms of our relationship to progress and how we pursue what should be a civilizational betterment, rather than just narrow tech progress.

[1:08:07] JS: That's totally the quote that I'm going to start the podcast with, by the way.

[1:08:13] AR: Direct [inaudible 1:08:13].

[1:08:15] JS: Okay. Well, thank you so much for joining us. Thank you so much for writing this. It's a really important piece. I'm excited to encourage people to read it. I'm glad that we captured a lot of the key points here, but I really think it's worth reading. I appreciate that it's a very accessible paper. It doesn't go so far out there that anyone can't read it, so I appreciate the comprehensiveness and the importance of this conversation and the work that you're doing to advance it.

[1:08:38] AR: Great. Thank you so much. I really appreciate you putting this idea set out there and being willing to go deep in discussing it, because I think it is really important. It's so valuable to have this conversation in the public domain and start to get a sense of the depth of the influence of our idea of progress on us and who we are and where we're going.

[OUTRO]

[1:08:56] JS: Thank you so much for listening. Thanks to Scott Hansen, also known as Tycho, for our musical signature. In addition to this podcast, you can find resources for each episode on our website, www.becomingdenizen.com, including transcripts and background materials for our most essential topics, like universal basic income, decentralized social media, and long-term capitalism. We also have posts summarizing our research, which make it easy for listeners to very quickly get an overview of these particularly important and foundational topics.

On our website, you can also sign up for our newsletter, where we bring our weekly podcast to your inbox, alongside other relevant Denizen information. Subscribers are invited to join our podcast recordings and engage with the Denizen community in our online home, The Den. We're partnering with some incredible organizations at the forefront of the change that we talk about. We share announcements from them in our newsletter as well.

Finally, this podcast is made possible by support from the Denizen community and listeners like you. Denizen's content will always be free. Offering Denizen as a gift models a relational rather than a transactional economy, enabling Denizen to embody the change that we talk about on this podcast. It is through the reciprocity of listeners like you that we are able to continue producing this content. You can support us or learn more about our gift model on our website. Again, that's www.becomingdenizen.com. Thanks again for listening, and I hope you'll join us next time.

[END]