Self-organizing systems & incentive landscapes
Recognizing self-organizing systems and applying them to various wicked problems like social media
Part II in “What's missing in our approach to the wicked potentials of our era?” series.
Once I stop looking for villains—a specific bad actor that generates a problem—I often start to see self-organizing systems as causal. Causes are where change happens.
A classic example of a “self-organizing system” is water going down a drain in the tub, creating a whirlpool.12 As long as there’s water to feed it, the whirlpool continues, even though no particular H2O molecules remain in the process. Similarly, the perverse incentive landscape of social media continues even when individual influencers step out of the drain, certain companies go out of business, or new companies (TikTok) come into play.
The unique relational interplay of each element together makes the whirlpool a “thing”; this is the kind of “meta-thing” I want to evoke when I say “incentive landscape.” The meta-ness of it can be difficult to see, because people will say “The whirlpool is created by water!” and they’re right. Others will say, “No, it’s gravity!” and they’re also right. But they’re barely right if they’re not looking at the systems level, seeing the interactions. And they’re also wrong if they claim exclusivity for any one cause.
Social Media (and therefore Collective Sense-making) as a self-organizing system
The metaphor somewhat breaks down when we look at interventions. With a whirlpool you can remove just one factor and the system stops, but with many of our wicked problems/potentials, the pieces all reinforce each other, making “removing” any one of them almost impossible without addressing all of them, together.
When it comes to social media for example, all of these factors (and more) reinforce each other:
Financial Incentives
A profit model that makes customers “the product” — our data is sold to advertisers
VC funding models demanding exponential user growth, short-term stock-market pressures, etc.
Measurement and Optimization (Technical implementation)
Engagement metrics as primary performance indicators for creators and platforms
Algorithms that optimize engagement and staying on site (over well-being)
Psychological and Cultural
Human psychology: Negativity bias, variable reward structures (slot machine), “Keeping up with the Joneses” when the Joneses are 8 billion people
Cultural shifts, e.g. from “one right way” to “multiple truths simultaneously”
Structural
Network effects creating winner-take-all platform dominance
Social graph lock-in & data portability barriers
Therefore we mostly say “incentive landscape” because it shows how this particular self-organizing system is locked in place by mutually reinforcing incentives.
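To make the mutual reinforcement concrete, here’s a toy sketch in Python. The numbers are made up and the four “factors” are just labels; this is not a model of any real platform. Each factor decays on its own but is fed by the others, so zeroing one out (say, the financial incentives) doesn’t collapse the system; the rest regenerate it:

```python
DECAY = 0.2        # how fast a factor fades without reinforcement (arbitrary)
REINFORCE = 0.5    # how strongly the other factors feed it back (arbitrary)

def step(strengths):
    """One update: each factor decays, then is fed by the (saturating)
    average strength of the other factors."""
    new = []
    for i, s in enumerate(strengths):
        others = [x for j, x in enumerate(strengths) if j != i]
        support = sum(others) / len(others)
        new.append((1 - DECAY) * s + REINFORCE * support / (1 + support))
    return new

# financial, technical, psychological, structural (arbitrary starting strengths)
factors = [1.5, 1.5, 1.5, 1.5]
factors[0] = 0.0   # "remove" the financial incentives

for _ in range(50):
    factors = step(factors)

print([round(f, 2) for f in factors])   # all four recover toward ~1.5
```

Set REINFORCE to 0 and you get the whirlpool case: each factor stands alone, so every factor decays to zero. Interventions that target one piece only work when the other pieces aren’t feeding it back.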
Changing the relationships between things means honoring the benefits of the current system
If it’s the relationships between things that make a self-organizing system a “thing,” then the key to a better system is designing different relationships, not swapping out the parts.
I think this is why BlueSky, Community Notes, and many others are admirable moves in the right direction that won’t get the job done: they don’t fundamentally change the incentive structure.
To change the relationships, we have to understand what the old relationships are providing, and find new ways to provide that, or something better.
I see an analogy in personal growth.3 Changing old habits works best when we see that we’re getting something we value from our current struggle.
For example someone might think “I wish I had more money” but without realizing it, they’re committed to the idea that more money = less ethical. I don’t think they’re right, but in their worldview, they’d rather be ethical than monied, so they conveniently find ways to never quite have more money. If they really want more money, they have to let go of the false association (or be willing to be less ethical).
It’s worth breaking this analogy down to make it clearer, because applying this understanding to our whole-system design is a crucial part of why UpTrust thinks it can succeed in our crazy ambitious mission where so many other attempts have failed:
In personal growth: Person wants X (more money) but has unconscious commitment to Y (being ethical) and false belief that X and Y are incompatible. So they self-sabotage getting X to maintain Y.
In social media: We want X (better connection online) but there are things we value about the current system Y (ego-gratification via money, power, influence, and entertainment; and belonging) and we may have false beliefs about whether X and Y can coexist.
The structure is:
Surface desire (more money / better connection online via better social media)
Unconscious “competing” commitment to something valuable (ethics / current benefits like making money from content, finding belonging, etc.)
(A) False belief that you can't have both or (B) outdated belief that the unconscious commitment is more valuable
Solution: either provide the valuable thing in a new way, or challenge the false belief, or update value-structure (or occasionally surface a deeper value)
If the analogy holds, for Social media:
Surface: better platforms
Competing commitment: creator income, belonging, news access
Belief issue: "current platforms are only way to get these benefits" (false) OR "these benefits matter more than systemic harm" (potentially outdated)
Solutions: provide benefits in reformed systems, challenge necessity beliefs, OR update values to weight collective wellbeing higher
…to change social media we have to see what’s working about the current incentive structure, and either provide something equivalent or better, or change the worldview about what’s true (deconstruct false associations). The competing commitments above are exactly what people get from the current system, and the full list runs deeper than it first appears: belonging and connection, meaning generation in the face of pluralistic overload (ironically), investor returns, ad income, creator income, and news access. Each of these has to be included, improved, or transformed.
UpTrust gets to take advantage of all of these solutions—providing better connection, better meaning-making (especially in the most controversial and confusing conversations of our age, thanks to algorithmic innovations), better conversion for ads (because they come from a trusted source), and a better experience for creators (because they don’t feel they have to sacrifice their ethics for income). We get to challenge the idea that humans default to our base nature (which Whole Foods, Chipotle, Sweetgreen, Patagonia, and the popularity of places other than Las Vegas have challenged before us), or that our desire for attention and competition can’t be prosocial (people compete for trust scores, which incentivizes trustworthy content). And we get to tap into a network of humans who already value collective wellbeing but have de-platformed themselves.
Part II in the “What's missing in our approach to the wicked potentials of our era?” series, which attempts to uncover some of the (un)common ways of seeing that might help us transform the trickiest global problems into something that sets our great-grandchildren up for living in a better world than the one we’re in.
Part I: No villains
Part II: Self-organizing systems & incentive landscapes (this post)
Part III: Integration of vision and practical in every decision
Part IV: Addressing the whole developmental stack (what’s good for me, we, all of us)
Part V: Culture addressing the problem must embody rare integral values
Part VI: Multiple stakeholder integration
So many examples: from the overuse of antibiotics in livestock to the QWERTY keyboard layout, from medical school hazing to cosmetically perfect produce, sector after sector shows reinforcement loops that make systems resistant to change.
Eliezer Yudkowsky collects examples of this in his book Inadequate Equilibria, including widely prescribed SSRIs despite modest benefits and good alternatives (like light therapy for SAD), and the replication crisis (driven by the credit/reward structure in science: journals, citations, tenure committees). Sometimes everyone sees the shortcomings, but no single actor can fix them unilaterally. Other times certain people benefit disproportionately (as with law school costs or regulatory capture) and actively maintain systems that are negative for civilization.
Most developmental theorists hold that seeing self-organizing systems is a natural capacity that comes online in the transition from “Green” to “Teal” / self-questioning to self-actualizing / 4.0 to 4.5. A subtler point that I don’t hear as often, but would guess they agree on, is that people at earlier stages can learn to see self-organizing systems without a developmental shift when it’s pointed out.
We teach the essence of this in Relateful Coaching Training; it shows up in Philip Golabuk’s Philosophical Counseling, and I believe it also shows up in Robert Kegan’s Immunity to Change, Existential Kink, and Leverage Research’s Connection Theory, although I haven’t dug into these deeply.