Daniel Schmachtenberger and the “War on Sensemaking”
Excerpts from Daniel Schmachtenberger’s Rebel Wisdom Interview
If you’re reading this, you’ve likely listened to Daniel Schmachtenberger’s interview with Rebel Wisdom regarding the “War on Sensemaking.” If you haven’t, scroll down to watch the video. Rebel Wisdom introduced me to Daniel Schmachtenberger, and his perspective sheds much light on the current infodemic and the ongoing state of collective sensemaking.
For the most part, this entry refers to excerpts from the interview for a deeper understanding of Daniel’s perspective. The interview is packed with so much perspective that it’s a lot to take in. I resonate with much of what’s being said, and I’m taking the time here to parse through what’s been shared. Let’s dive right in, as Daniel gets into the thick of it very quickly.
Daniel’s take is that what most people call “news” is mostly propaganda: “narrative warfare” on behalf of some agency. He says, “they aren’t good sources of sensemaking. You would hope though that there are high signal, low noise sources of true information, maybe scientific journals, like academia and science itself.”
Daniel says, “I hoped this a long time ago. I had continuous disappointment, I can’t trust the news to be true, because news is narrative warfare. And I can’t trust science without actually looking at: what was the methodology employed, how was it funded, what were the axioms used? Am I seeing all of the data or the cherry-picked data?”
“Where are the high signal, low noise sources of information that I can offload some of the high complexity of making sense of the world to?” He continues, “The answer is really sad. I don’t know any sources that are high signal, low noise across lots of areas. Why is that? And what would it take to fix that? What would it take to make a world that had an intact information ecology?”
“Well that requires understanding why the current information ecology is as broken as it is. And we’re starting to touch on a couple of things here, but this thing goes deep. How do we make good choices, if we don’t have good sensemaking? Well, obviously we can’t.”
Can We Fix Our Sensemaking?
“Due to increasing technological capacity (right, increasing population multiplied by increasing impact per person), we’re making more and more consequential choices with worse and worse sensemaking to inform those choices. Which is kind of like running increasingly fast through the woods, increasingly blind.”
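A minimal way to formalize that parenthetical (my notation, not Daniel’s) is the familiar impact identity, where total consequence scales with both factors at once:

$$ \text{total impact} = \underbrace{P}_{\text{population}} \times \underbrace{A}_{\text{impact per person}} $$

If P and A both grow while the quality of sensemaking stays flat or declines, the gap between the consequence of our choices and our capacity to inform them widens on both fronts.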
“How do we actually fix our own sensemaking?” Daniel continues, “This is creating more discussion around this. Part of how we work with our own sensemaking is recognizing that the cognitive complexity of the issues the world faces is more than a single person can process. A single brain cannot actually hold that kind of complexity, so it requires collective intelligence and collective sensemaking. But I can’t just offload the cognitive complexity to some authority, because I can’t trust that they are actually doing good sensemaking.”
“Maybe they’re doing good sensemaking within a very limited context, but then the application of that outside of the context is different, and maybe there are even distortions within their context. So I have to try to find people who are really endeavoring to make sense well. Which means they have to understand what causes failures in sensemaking. And then we have to see: can we create relationships with each other that remove the distortion basis that is normally there?”
“From what I have seen of Rebel Wisdom, this is probably the strange attractor that is bringing everyone to watch it: people who are trying to make sense of the world better themselves, and trying to find sources of content from other people who have been trying to make sense of it well.”
“Looking forward to getting into why we have as broken an information ecology as we have, what it would take to correct that at scale, and how we can make sense of the world even in the broken information ecology now, in terms of practical processes.” Daniel admits, “I’ve never actually shared these types of frameworks publicly before, so this feels fun and exciting. And I hope that it’s useful.”
Don’t Dumb It Down
“There’s a famous quote by Einstein: “Make things as simple as possible, but not simpler.” As simple as possible, really the goal is: as clear as possible. But simpler would mean it’s wrong, like it’s not accurate anymore.”
“You know, you’re going to have to face this doing media work. There will be pressures on you to say: hey, people can’t pay attention to more than sound bites, you gotta make it 5-minute chunks, the words are too big, make it for an 8th-grade level, right. Which is saying people are dumb, so spoon-feed them stuff that dumb people can handle.”
“Those are the pressures for anyone doing broadcast… even with good intentions. If we want people to actually be able to make sense of the world well, you can’t do it in very short periods of time with lots of distraction, and oversimplify it.” He points out, “We mostly don’t read books.”
“Have you seen the references in Bucky’s books? It just shows the shit that he read in order to make sense of things well. And so there’s a decoupling of the sense of agency from what it actually takes to do it.”
“Everybody wants to be buff, but nobody wants to lift heavy-ass weights. Everybody wants to win, but nobody wants to work harder. There’s something like that here. If I want to be able to make sense of the world well, I have to work at that.”
“Attention requires being trained, just like muscles require being trained. Thinking clearly requires being trained. And any time there’s a hormetic process — hormesis is the principle by which you stress an adaptive system to increase its adaptive capacity — so I have to stress a muscle to get a muscle to grow.”
“You have to stress a system, in order to grow a system — in a particular kind of way, not all stressors are going to grow the system. But this is definitely true cognitively. If I keep paying attention to hyper-normal stimuli that are moving quickly, so that the stimuli have lots of novelty, I’m going to be decreasing my attention. But if I want to have any type of nuanced view, I have to be able to hold multiple, partial views in working memory.”
To Defer or Not To Defer
“It’s not that some people intrinsically have good memory and good attention and other people don’t (any more than some people are intrinsically buff and some people aren’t). It’s developable. But it has to actually be developed.”
“So the impulse to say, hey, make something simple so everybody can get it, AND the impulse to help people actually make sense of the world, are different things. Now some people will make stuff technical-seeming intentionally to obscure it, as a power game, so as to encourage others to defer their sensemaking to them.”
For instance: “I understand this complex shit, you’re not going to be able to understand, so defer authority to me.” Daniel clarifies the dilemma: “If we actually want to empower people, I don’t want them to defer their sensemaking to me. But I also don’t want them to do lazy, shitty sensemaking. Or defer to anyone else. Which means I want them to grow the quality of their own sensemaking, which means to grow the depth of their care, right, anti-nihilism.”
“To grow the depth of their care, to grow the depth of their earnestness. Their own self-reflexiveness, to pay attention to their biases and where there’s sloppiness in their thinking. Their own skills and capacities: to grow attention span, both the clarity of their logic and the clarity of their intuition, and the noticing of when something is coming from logic and when something is coming from intuition.”
Our Broken Information Ecology
“The information ecology is that there’s a whole ecosystem of information, right? Like we have information coming in from marketing, information coming in from government sources, from campaigning, from just what our neighbors tell us and our friends tell us, from social media… and we use information to make sense of the world to make choices, aligned with whatever our goals and our values are — what’s meaningful to us.”
“And what we hope is that the information around us is mostly true and representative of reality, so we can use that to make choices that would be effective. When I say broken information ecology it means that we can’t trust that most of the information coming in is true and representative of reality, and will inform good choice making.”
“So where does information come from? Signals are being shared by people and by groups of people that have shared agency, like corporations and governments and political parties and religions and whatever, right… We want to start getting into: why do people share information, other than just sharing what is true and representative of reality? This is actually a really key thing to start to understand.”
Determining What Is True
“The difference between true and truthful. This is the first important distinction. When we say someone’s being truthful, what we mean is… if you’re being truthful with me, it means that what you’re sharing maps to what you believe. Right, that there’s a correspondence between the signal that you’re communicating to me and what you believe is true.”
“So we can look at breakdowns of truthfulness, which is where people are distorting information with some intentionality. And that can either be through overt lying, or lying through omission, or lying through emphasis bias. Those kinds of things. So that’s how we’d describe ‘truthful,’ and this is how we’d describe ‘true.’”
“When we say something is true, what we typically mean (and this gets very nuanced… I’ll put that on hold for now)… In general, if we say that someone’s saying something that’s true, we don’t just mean that there’s correspondence between what they’re saying and what they think, but that there’s a correspondence between what they’re saying and some independently verifiable reality.”
“Of course someone can be truthful, meaning they’re saying what they believe, but what they believe is misinformed, because they did sensemaking poorly. So they’re propagating information honestly, but it is not true.” Daniel stresses, “So we need to look at distortions in both of these.” We need to look at distortions in both “truthful” and “true.”
True, Truthful and Representative
“There’s a third thing, which is representative, which is: it’s possible for someone to be truthful, to share exactly what they think is going on. And what they’re sharing is actually true, they’ve actually done good epistemology and empirically validated that what they’re saying maps to the reality in some clear way. And yet the interpretation I get from that will still actually mislead me, because… the true information is not representative of the entire context well.”
“Articles published in famous journals, peer-reviewed scientific journals… Five years later we’ll see that a major percentage of them, something like 50%, are found to be mostly inaccurate.”
“We see that the things that get studied, even if it’s true information, can be misleading… For the most part, where does the money to fund the research come from? It’s gonna come, within capitalism, mostly from where there’s some R.O.I. (return on investment) on the research, and some areas have more R.O.I. than others. So even a bunch of true information, weighted towards certain parts of the information ecology over others, creates misrepresentation through preponderance of information.”
Even a bunch of true information can create distortion.
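One hedged way to formalize that (my notation, not Daniel’s): even if every published finding is individually true, when the distribution of topics studied tracks funding returns rather than real-world relevance, the resulting map is skewed:

$$ \hat{p}(\text{topic}) \;\propto\; \mathrm{ROI}(\text{topic}) \;\neq\; p^{*}(\text{topic}) $$

Here $\hat{p}$ is the share of attention a topic gets in the literature and $p^{*}$ is its actual weight in the territory; a reader who samples the literature uniformly inherits the bias even though no single paper lies.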
Earnestness of Inquiry
“You can say the essence of science is no bias. At least the idea, or spirit, of it. In capitalism, it is about optimizing for bias: I actually have agency, I’m trying to get ahead, I have an intention to increase my balance sheet. And so, if there’s capital funding of science, it’s gonna fund the things that create R.O.I. on that research, so we can keep doing more research.”
“That creates both a reason to distort the info and a reason to withhold information that is a source of competitive advantage, and a reason to create disinfo for the other competitors, or at least to wait.”
“Do I have some sense of what the actual territory is? And do I have a sense that the map that’s being created actually maps onto the territory reasonably well? Because sensemaking means map generation, right? To be able to make choices about how we navigate. How we do choice-making in relationship with some actual territory.”
“So that’s ‘true,’ ‘truthful’ and ‘representative,’ and we can look at distortions for all three of those.”
Strategic Signaling
“Look at why an individual distorts information. The most fundamental way of thinking about it is this idea of signaling: if I’m just in nature watching what’s happening with rabbits and trees and birds, I’m getting information about them that they aren’t even intending to transmit. The information is just reflective. Right, light’s actually reflecting off of them; it’s reflective of the nature of reality.”
“As soon as there’s an agent that can share information strategically for an intention, then I don’t know if what they’re sharing is reflective of reality or reflective of what they think will advance their intention.”
“And that’s kind of the key distinction: the moment we get abstract signaling, which language allows us, and the ability to kind of forecast and model each other. And your well-being and the basis of your agency doesn’t seem perfectly coupled with my well-being and the basis of my agency.”
Market Dynamics
“In the case of if I’m a marketer of a product and I want you to purchase it, whether my product is actually the best product or not, whether a competitor’s product is better, whether you need the product or not, I want you to think that you need it and to think that mine is the best. So there’s a breakdown between what seems to be in my well-being and what seems to be in your well-being.”
“So wherever there’s any misalignment in agency, and there’s the ability to share signal for strategic purposes, then you have a basis to have a signal that’s being shared that isn’t just truthful, right. So then we look at: well, where is that happening? It’s fucking everywhere, right. To really gross or subtle degrees. And sometimes for dreaded purposes.”
“You’ve got the prosaic purposes, which are basically market-type purposes; most of the dynamics in the world are market-type dynamics or influenced by market dynamics. In a market-type dynamic I’m going to be sharing information, and this is why we say: buyer beware.”
“Buyer beware isn’t just, you know, check to make sure the car isn’t broken down; it’s also check to make sure the information being shared is true. Because I’m actually sharing information as a service, right, and you’re purchasing that information, whether you’re being monetized through an ad or you’re paying for it directly. There’s nothing saying that I’m sharing true and truthful information.”
Control of Choice-making
“In market-type dynamics the goal of marketing as a company, and from the supply-side of supply and demand dynamics, the goal is to compel the purchaser’s action in a particular way. Which means as a company I want to do sensemaking for you. Because I want to control your choice-making. I at least want to influence your choice-making.”
“I’m not actually interested in your sovereignty and I’m not even that interested in your quality of life. I’m interested in you thinking that I’m interested in your quality of life. I’m interested in you believing that my stuff will affect your quality of life. But whether that actually corresponds or not, I don’t care. In fact, if I can sell you food that is very addictive, or cigarettes, or social media, or media, or porn, or whatever it is — that actually decreases your baseline happiness, but then makes you need another hit faster and is addictive, that’s really good for lifetime revenue of a customer.”
“And to the degree that my fiduciary responsibility is to maximize profitability for me and my shareholders, I need to maximize the lifetime revenue of my customers multiplied by the size of the customer base. Then addiction is the most profitable thing I can get, and that’s never in the best interest of the customer. Now, as a corporation, I get to employ a whole bunch of people to do market research and run split tests… to be thinking about your choice-making more than you’re thinking about your choice-making.”
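Making that arithmetic explicit (my formalization, not Daniel’s), the objective he describes is roughly:

$$ \text{profit} \approx \underbrace{N}_{\text{customer base}} \times \underbrace{\mathrm{LTV}}_{\text{lifetime revenue per customer}} $$

Addiction raises $\mathrm{LTV}$ by increasing purchase frequency and retention, so under this objective it is the attractor the optimization converges on, independent of customer well-being.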
An Unknowing Agent of Asymmetric Info Warfare
“Not only is the information I’m sharing with you not just truthful, it’s a form of narrative warfare. We’re agents that are actually competing for what you do: I want you to do something in particular, and you want to do what’s best for you; those aren’t the same thing. And it’s actually asymmetric warfare, because I have a lot more ongoing team and focus, especially if you start to look at a big corporation empowered by A.I. and data, etc. It’s radically asymmetric info warfare, and you don’t even know it’s happening. You don’t even know you’re engaged in it.”
“This is ubiquitous, right? As we’re exploring reasons that people share things that are not fully truthful and representative, there are of course things worse than this. But this is most of where distortion comes from: agency misalignment. Well, it’s always agency misalignment, right? Most people call that the market.”
“The same way we externalize cost to the environment, we can externalize cost to the information environment. Disinformation is pollution of the information ecology; that’s kind of a good way of thinking about it. And as ubiquitous as pollution is (even the snow on the top of Mount Everest is full of pollution of many different kinds), I would say information-ecology pollution is more ubiquitous. Because it’s not just big industrial players doing it, it’s everybody doing it, and you can’t even see it clearly.”
Not Always As Grim As It Sounds
“But people will even create distortions in information for seemingly positive reasons. First there are kind of innocuous reasons: okay, I’m gonna write a testimonial or an endorsement for my friend’s book because they’re my friend, and even though I think there’s stuff wrong in their book, it wouldn’t be that gracious of me to say so.”
“And maybe there’s some game-theoretic stuff in there: they wrote a testimonial for my book and I want them to keep doing that. So in giving the endorsement, whatever credibility other people proxy to my sensemaking, I’m now proxying over to something that is not necessarily true. And even if it’s not reciprocal (even if I’m just being supportive of them taking a positive step), that doesn’t mean that anyone else who sees that I offered that testimonial, and is using it as a method of their own sensemaking, knows why I did it.”
“Here’s the other thing: there’s a decoupling of the signal that I’m sharing from the intention that I’m sharing it for. I might be sharing something with you while I have four or five complex intentions; I might not share any of them with you, or maybe I share one.”
Attention Hijack
“When you’re getting information from a news channel, you’re like: oh, this news channel wants to maximize my time on site. And it can do that through appealing to my cognitive biases and my emotional biases and my identity biases.”
“It can do that through things that are inflammatory. It can do that through all kinds of things that are hypernormal stimuli and that hijack my attention. This is where it’s competing for my attention against where I would want to put my attention, because it’s monetizing my attention. Right?”
“So I have to factor the agency, the intention of the news station and try to remove that artifact from the information to try and infer what the true information might be. Basically to infer what the source of distortion might be. The same with the political candidate, the same with science that’s coming forward and I’m looking at: “okay, so who is seeking more grant funding, and what is easiest to fund?”
“And where are there standard model biases, where people are only going to share the shit that’s gonna get them more funding and that’s gonna get them more tenure? Where they have to defend the thing that got them the Nobel Prize, even if it may not be true anymore, for ego and identity biases. You gotta factor all of those kinds of sources of possible bias.”
Recognizing the Strategy behind Signals
“And so this is the first kind of valuable thing when you’re trying to do sensemaking: recognize that the signal you’re getting everywhere is mostly strategic. Strategic (which is just another way to say intentional) on the part of the agent sharing it, for their purposes, not yours.”
“And where there is a misalignment between your well-being and theirs, or at least an apparent one, their basis for intention might actually suck for you. And even if there seems to be alignment, you still don’t want to be lied to for your own good. You still want true information.”
“So one of the first things we want to do when we do sensemaking is to look at why anyone is sharing what they’re sharing, and to not assume that they are being truthful.”
True vs. Truthful
“So basically, truthful is about game theory. Right? Truthful is about the fact that people are lying all the fucking time.”
“When you’re playing poker, you learn how to bluff, because it’s not who has the best hand that wins; it’s who makes everyone else think that they have the best hand. And right, there’s a lot more that goes on than that. Because it’s a zero-sum game (my win does not equal your win; my win is going to equal some other player’s losses), I have an incentive to disinform you wherever information about reality is a source of competitive advantage.”
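In game-theoretic terms (my formalization, not Daniel’s), a zero-sum game is one in which the players’ payoffs $u_i$ always sum to zero:

$$ \sum_{i=1}^{n} u_i = 0 \quad\Longrightarrow\quad u_{\text{me}} = -\sum_{i \neq \text{me}} u_i $$

Any gain of mine is exactly the other players’ combined loss, which is why bluffing (strategic disinformation) is individually rational in such games.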
“This is kind of the key way to think about it, because disinformation even happens in nature with other animals. You’ll see a caterpillar that evolved to have something on its tail to disinform birds, so that they go to peck at the false head and it might still be able to live, right. That’s actually an evolved disinformation strategy.”
“It’s just that the disinformation in nature happens very slowly, and the selective pressures on the side of the caterpillar and the bird are co-evolving. Right, so the bird is getting better at noticing those things as the caterpillar is getting better at that camouflage as a kind of disinformation. It’s an attempt to not signal something fully, because there are rivalrous dynamics between the caterpillar and the bird in that scenario.”
“But with people, with our abstract replicators, we can create the distortion much much much faster. We can have asymmetries in the capacity to create the distortion. And even have exponential asymmetries and so it’s actually really quite different.”
“So you think about the poker bluff, and you think about, even in soccer or football, when someone fakes left and then goes right — that’s a disinformation strategy. Where information about the nature of reality — where the water is, where the gold is, what the market is going to do next, whether this company is going to make it, whatever — equals a source of advantage, and we’re in an assumed rivalrous dynamic (we’re competing for the same sum, competing for the same attention, whatever), then first, I have the incentive to withhold information.”
Multiple Ways to Fuck Up the Information Ecology
“So I don’t want to tell you where the gold is, or I don’t want you to know the intellectual property that I’m going to monetize. Simply the withholding of information fucks up the information ecology so much. Say I’m doing cancer research and I’ve had some big breakthroughs, but I’m not trying to share that with everyone else who’s doing cancer research, because this is being funded by a for-profit process that needs to be able to monetize that intellectual property, right.”
“And so we can see how much of the problem happens as a result of withholding information. But we can also see how intractable this problem seems within a game-theoretic environment like capitalism. I keep saying capitalism, but I’m not going to say that any of the other bad economic systems we’ve ever tried are the answer, because they aren’t. We have a basis for disinformation in communism and socialism and fascism. I’m going to suggest that new structures that have not ever existed are needed. I just want to say that here, so that people don’t assume that because I’m criticizing capitalism I’m going to suggest something that doesn’t work.”
“So the first thing is withholding information, and we see in business how much focus is on IP and NDAs and, you know, those types of things. But then it’s not simply withholding information, it’s also disinforming, right. Just like the poker bluff, or the fake left and go right.” (He mentions how this is also used in the military.)
“We’ve had a basis for disinformation for a long time. We’ve had rivalrous dynamics for a long time. The rivalrous dynamics are a basis by which we can get ahead by war, killing somebody else, or lying to them, right. Or ruining the commons. It’s just that exponential tech applied to those same incentives leads to exponential disinformation.”
Grasping an Exponential Complexity
“Exponential extraction, exponential pollution, exponentially scaled warfare — and on a finite playing field, that self-destructs. So the underlying cause is the same stuff that’s been happening, but at a speed and scope and scale and level of complexity that forces us to actually deal with the underlying structures now, because they can’t continue.”
“That’s the game-theoretic side of it. Now, what would it take to have an intact information ecology where, just on the truthfulness side, any information anyone had was being shared and there was no incentive for disinformation? First let’s just imagine that, right: no disinformation.”
“Let’s give some other examples of disinformation. It’s not just where I’m intentionally trying to mislead you; there’s also where I’m sharing signal for some purpose of mine that might mislead you, and I’m not intending to, I just don’t care if I do. So let’s say I want some increased attention. This might be because I’m going to monetize that attention. It might be because I’m going to get political power. It might be simply because I want attention, right.”
“So let’s say I comment on what some famous person is doing. Let’s say I disagree with them. I’m instantly going to get some attention if I critique them effectively, attention that I didn’t necessarily earn, and I don’t even have to believe the critique, because via association of that type I’m going to get some attention. So now people have a basis to focus on something they weren’t focused on before: to criticize it because that will get attention, or to compliment it, or to play off of it in a way that is not actually what they care about or believe. And again, you look at how ubiquitous that kind of phenomenon is.”
A Post Game-theoretic World
“So the answer to getting over the truthfulness issue is actually a post-game-theoretic world. Which is the same answer as: how do we get past warfare? Well, it’s not just kinetic warfare where you throw bombs or rocks at each other. It’s also info warfare and narrative warfare and economic warfare, right. Which is basically any in-group that is coordinating to compete against an out-group in some kind of zero-sum dynamic.”
“And that’s companies to companies, it’s companies to people, it’s people to people, it’s countries to countries. It’s global economic trading blocs with each other. It’s all of those things.”
“What can people do right now within a game theoretic world to start to create spaces of truthfulness? Start to create relationships where one of the highest values is truthfulness with other people that are capable of and want and are committed to that. Where people are not only not lying to each other but they are endeavoring to not withhold information.”
“Which is tremendous intimacy and tremendous vulnerability. And see if you can create enough psychological safety with some people to be able to start exploring what it means to actually share information honestly, so that we can have that and all make sense together. That’s one thing.”
“And there’s also something where, just as you don’t throw trash out the window of your car because you don’t want to pollute the environment, be careful about not polluting the information ecology by rationalizing why your own mis- or disinformation is okay. Just start to think of it that way. Think of any time you’re sharing little lies as polluting the information ecology, and be like, ‘oh wow, I don’t want to do that. I don’t want to be part of the fucked-up information ecology.’”
Epistemology and the Shared Reality
“Now, the true side is not just a mapping or correspondence between what I’m saying and what I think, but between what I’m saying and what shared reality is. Which means there has to be a correspondence between what I think and reality, which means I had to do sensemaking well before sharing something, right. So this is the topic of epistemology.”
“So, one is movement past game theory. The next is epistemology — how do we know stuff. So even if nobody was lying and withholding information, the complexity of the world makes epistemology hard. And most people aren’t even endeavoring at it.”
“So: if no one was lying and I could take all the information as at least truthful, there would be certain epistemic processes that I could apply that I can’t apply if I can’t even take the sources of signal as being signal without a lot of noise, right. So there’s an epistemology that I have to have within the context of an environment that has a lot of disinformation: how do I make sense of what is true and what isn’t true about the signal coming in, and then how do I parse, from lots of signal, what might be true about reality?”
“To just get a sense of how big a deal this is, take any of the biggest issues in the world, like the issues that could determine whether or not we keep existing as a species. Okay, take big environmental issues like climate change: there’s disagreement as to whether climate change is really even a thing, and to the extent that it is a thing, what the causes are and what the time scales are.”
Fervent Ideas and Narrative Warfare
“Now take most people who believe fervently, ‘no, climate change is real. 97% of climate scientists agree, it’s anthropogenic greenhouse gasses,’ etc. Most of the people who believe that fervently enough to go into narrative warfare over it have never actually looked at the primary data deeply themselves. And yet there’s an almost religious fervor around it, based on having proxied their sensemaking to people who they believe. The UN’s head said it, or the Gates Foundation said it, or whatever it is; I’ve heard it repeated enough times. Just through repetition I have been programmed to believe this thing is true. Which is not that different from believing a fundamentalist religious ideal, right.”
“Let’s say we take people’s fervent ideas on vaccines, or their fervent ideas on the viability of market ideology, or almost anything like that: almost no one who has fervent ideas has a good epistemic basis for the level of certainty they hold. There’s a decoupling between how much certainty they have and how much certainty they should have through right process.”
“And then you look at who they are proxying their sensemaking to, and most of the time it’s not even the people who did the original research. Many of whom disagree with each other, and were funded by someone to say something that is not fully true in the first place, and who were maybe employing epistemic biases themselves.”
“But typically it’s somebody else who looked at all of this, and then someone else who looked at all of that. So you might have a bunch of climate scientists feeding into someone who speaks about it at a more synthetic level (like a James Hansen or whatever), then someone like a Gore who is actually speaking to the public, who we’re proxying our sensemaking to. And we have to ask: okay, how many steps removed is it, and how good is the original data?”
“And so if we think about, okay, how much radiation actually was released into the environment from Fukushima. It seems like a very straightforward thing: take a Geiger counter and go out and do the studies. But how many people are equipped to take a Geiger counter out and go do that? Or to actually pay attention to how the flow dynamics in the air and the water are going to work, or, you know, so many things.”
“So we have to take other people’s data to begin with. And those other people (let’s say the data was from the Japanese government or TEPCO or whoever, or from a conspiracy-theory group saying, no no, it was actually released in huge amounts into the ocean and all the fish are totally toxified) might have a basis to disinform, because they’re getting viewership that they’re monetizing through that. How the fuck do we make sense of it, right?”
“So what we start to get is: is AI gonna solve a bunch of problems and be relatively safe, or is AI the biggest risk and going to kill us? You see the kind of fervent disagreement, but you still see pedal-to-the-metal, going as fast forward as we can with AI, and with CRISPR biotech, and with every type of exponential technology that could be catastrophic.”
Impaired Agency from Fractal Disinformation
“There’s increasing speed of choice-making with decreasing sensemaking. And just think about, okay, what’s really going on with the Chinese government and its cyber-warfare relationship with the US government, what its actual capacities are, and what its intent and agency is, those types of issues. Well, we know that’s going to be obscured. We know both sides, and all kinds of sides, are going to be obscuring information.”
“And it gets even worse, because it’s not just that you’ve got this group of people called China and this group of people called the US, and that they’re in a game-theoretic relationship with each other while everybody on Team USA cooperates perfectly. You know, of course it’s not like that.” (He mentions the intelligence agencies and how they might be withholding information or sharing disinformation.)
“You have fractal disinformation, right, at almost every level, because of a game-theoretic incentive system: the balance sheet of countries, the balance sheet of organizations, all the way down to the balance sheet of individual people, right. This separation of agency.”
“Now I’m back to the game-theory, truthfulness side, but I have to factor that in when I’m trying to make sense of things, because I have to be able to parse signal from noise to then be able to synthesize the signal. But even if that wasn’t the case and I’m just trying to do epistemology on good signal, I have to say: okay, in a complex system where we can’t even forecast the weather 10 days out very well, how do I forecast the effect of putting certain kinds of pesticides or GMOs or whatever into the environment, right?”
The Metrics for Epistemology
“It’s a complex system that we can’t forecast very well at all. We don’t know the tiniest bit of the actual information of how that complex system is going to regulate, but we’re going to do stuff that affects those systems at scale. What is the right epistemology to be able to make sense of whether this is a good choice?”
“So you can see: what are all the metrics we have to factor? Let’s say we’re talking about biotech. Okay, so I can give you a drug that is good for some biometric that happens to be associated with the disease I’m targeting. I’m trying to get doctors to use this drug, allowed by the FDA, to treat a particular disease. Since the disease is identified by this biomarker, I have to affect this biomarker.”
“So let’s say I’m talking about high cholesterol, and so we develop this statin for it. How many other metrics is this statin affecting? Well, every day we’re learning about new biometrics we didn’t even know existed. How many of those are being affected? They’re part of the unknown-unknown set that we don’t even know about, to be able to do risk calculation on.”
“Now, we could say: well, let’s run the experiment long enough before we release the drug to see if it affects total longevity and all-cause mortality. Well, nobody’s fucking doing that. Nobody’s gonna run hundred-year experiments on something before they release the thing; they’re gonna run the shortest ones they can. So where you have a system that has delayed causation, how do I know if it’s creating problems way down the road? Well, it does, all the time, right?”
“The questions become: What would the right epistemology be? How many metrics do I have to factor? How do I know how to factor those metrics? What is the total information complexity of the scenario, relative to the information complexity of the assessment that we’ve actually done?”
“So we can get into at some depth the topic of appropriate epistemology for various contexts. But the first thing I can say is that if people aren’t even thinking about that, their chance of making sense well is pretty close to zero.”
“How Bad Ideas Defend Themselves”
“If we think about the concept of a meme, the way that Dawkins originally put it forward: it’s an abstract pattern replicator, where a gene is an instantiated pattern replicator. Which means that it can mutate and change and affect behavior and propagate much, much faster, right. And we can kind of say that in Homo sapiens, our genetics selected for memetics, for higher-order memetics.”
“Because with most other species, the selective pressures had them be adaptive to an environment. There’s mutation, and the mutations that survive and mate the best are the ones that make it through. But that’s within the context of surviving that environment and being able to mate successfully in that environment. So they become more and more fit to their environment.”
“Because of our abstraction capacity, which is both our capacity for language and memes as well as tools, we were able to become adaptive, to actually become apex predators in the savanna and in the Arctic and in the ocean and everywhere, right? We were able to go to every environment.”
“Which means that as soon as our population would normally (if we were any other animal) start to level off in relationship with the carrying capacity of the environment, we just moved. We were able to decimate that environment and move to the next one, right, into all of them.”
“But that part isn’t the part that I want to get into right now. It’s that, since we were going to be adaptive to totally new environments, since we were going to create tools, and since we modify our environment in ways the other animals don’t, what it was to be adaptive was changing. We can’t come in genetically fit to a specific environment; we have to come in able to imprint the environment that we’re in, so we know how to be fit to totally different environments.”
“Because, like, it’s not that adaptive for us to throw spears or even climb trees all that well. But it is adaptive to be able to, like, text and drive: stuff that wouldn’t have been adaptive a thousand years ago at all. And so this is why human babies are embryonic or neotenous for so long, right, compared to any other animal.” (He makes a comparison between humans and horses based on how long it takes each to walk.)
“There is nothing like us in terms of that extended helplessness, and that’s because we don’t have inherited knowledge of how to be us, since the environment is going to be different. We have to imprint the environment that we’re in to be able to be adaptive to environments that we’re changing, right.”
“So this is saying that our genetics selected for neuroplasticity, selected for memetics. Our hardware selected for faster software updates that could make faster changes in our adaptive basis, so that we could move environments and all those types of things. So think about a meme kind of like a gene, as a pattern replicator, but an abstract pattern replicator that can mutate much quicker. There’s also a big difference: with the other animals, mutation across genes is very evenly distributed.”
Adaptiveness, Evolution and the Selection Process
“Mutations are happening to the gazelles and to the cheetahs at an equal rate, right. And there are co-selective pressures on both of them, so they’re both getting faster, the slowest ones of each are dying off, and there’s this kind of symmetry of power where the competitive pressures between them have them all up-level.”
“But when we start being mostly memetic while the other species are still mostly genetic — meaning we’re getting adaptive based on abstract pattern replicators while they’re still on instantiated replicators — we can increase our capacity much faster than they can increase theirs, faster than the environment can increase its resilience to our capacity. Which means that we can debase the whole substrate that we depend upon, which is self-terminating, right. You can’t keep debasing that which you depend upon.”
“So in evolution there is a selection process for the genes that make it through, right. But there’s a kind of symmetry in the genes that make it through, because of the evenness of mutation and because of those co-selective pressures. Now with memetics, the memes that make it through are the memes that win in a rivalrous context, not the ones that necessarily represent the true or the good or the beautiful.”
“So the propagative memes propagate more than the true memes propagate. This is a super important concept to understand. I was always dumbfounded thinking about the evolution of religions. Take Christianity, for instance.” (Like the story of Jesus and Mary Magdalene: “He who is without sin may cast the first stone.”)
“He’s bringing forgiveness to Judaism, and in his name we did the Crusades and the Inquisition, and said we will not just kill but torture anyone who doesn’t accept the Lord of peace as their… It’s like, how the fuck did we do that? How did we do the mental gymnastics to take the guy whose key teachings were forgiveness, and torture people in the name of that? Well, we figured out how to do it, right. But the key is that we were super adaptive.”
— —
Here’s the full video of the War on Sensemaking on Rebel Wisdom’s channel: