Jason Lee Byas has already articulated what I think is an interesting (and effective) challenge to Max Stirner’s critique of morality, and has arrived at a conclusion that I very much endorse: namely, that self-interest and caring for the well-being of others need not be separate concerns.
In my view, the most important contribution that Byas has made in this discussion (and Chris Matthew Sciabarra has made a similar contribution) is to take, as given, the metaphysical groundwork that amoralists utilize. That is to say, Byas and Sciabarra assume that the “self” in “self-interest” is cleanly delimited from other selves: the “self” that is “me” and the “self” that is, say, “Wayne Gretzky,” are separate entities leading separate existences. Under this assumption, should Wayne Gretzky and I ever interact, we remain two separate entities; clear boundaries remain between the two of us. As I said, showing that we can still marry self-interest and morality even if we understand “selves” in this fashion is important, because many amoralists take this assumption to automatically entail that they can disregard the interests of others.
I’d like to take a slightly different route and place questions of morality and self-interest in the context of complexity and complex adaptive systems (CAS). I want to emphasize the inherent interdependence between nodes in complex adaptive systems, and note that we can reach a conclusion similar to Byas and Sciabarra’s: where they see self-interest being embedded within morality, CAS helps us see how morality (taken here to mean a general concern for the interests of others) is embedded within self-interest. And not just in a way where the only “acceptable” morality is one in which whatever furthers your own self-interest is “good,” as in the traditional interpretation of Objectivism; CAS helps us see how behaviour that is traditionally labeled “altruistic” is nonetheless behaviour that it is in your self-interest to adopt.
We can reach this conclusion by realizing that in a CAS, one “part” (roughly analogous, here, to an individual “self”) is inescapably tied to the actions and reactions of other “parts” of the system. Thus, unlike in some thought experiments where the “self” is rather isolated from others (a “Robinson Crusoe” style situation), in a CAS it’s unavoidable that the self will be impacted, however indirectly, by the actions of other “selves” — and vice-versa. The question we could then ask is: how should a rational person act given this fact? We can even stack the deck against morality and say that, by rationality, we mean only a thin “merely-instrumental” variety of rationality, where all that matters is consistency between our intentions and our goals.[1]
Given the nature of complexity, I think we’ll find that even a thinly rational person ought to be acting in the way Byas and Sciabarra delineate. That is to say that, under conditions of complexity, a thinly rational person will act towards other people’s interests the way an anarchist would act towards other people’s interests.
The Nature of Complex Adaptive Systems
It’s widely accepted that humans are embedded within CAS, to say nothing of being complex adaptive systems in and of ourselves. Multiple fields — biology, sociology, economics, political science, computer science, the list goes on — have assimilated lessons from complexity theory and applied them to their models of agent and institutional behaviour. This is to say that denying that human beings are embedded in CAS flies in the face of cutting-edge developments in all manner of fields that are concerned with understanding the nature of human beings and our societies.
With that established, there are three facets of CAS that are important for my argument. The first is emergence: that is to say, in complex adaptive systems, interactions between parts generate higher-order phenomena that the parts do not possess on their own. One example would be neurons and consciousness: so far as we know, individual neurons don’t themselves possess whatever consciousness is. It’s only when neurons interact in a particularly structured network that something like first-person self-awareness exists. The more individual parts interact with one another, the greater the variety of emergent properties exhibited by the system.
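To make emergence concrete, here’s a minimal sketch (my own illustration, not drawn from the essays I’m responding to) using Conway’s Game of Life. Every rule in the program refers only to individual cells and their immediate neighbours, yet a “glider” emerges: a five-cell pattern that travels diagonally across the grid. No rule, and no cell, contains any notion of a glider; the motion is a property of the interactions.

```python
# A minimal sketch of emergence: Conway's Game of Life.
# Each cell follows one purely local rule, yet the "glider" that
# travels across the grid is a pattern no individual cell possesses.
from collections import Counter

def step(live):
    """Apply the Game of Life rule to a set of live (row, col) cells."""
    # Count how many live neighbours every relevant cell has.
    neighbours = Counter(
        (r + dr, c + dc)
        for (r, c) in live
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
        if (dr, dc) != (0, 0)
    )
    # A cell is alive next generation if it has exactly 3 live
    # neighbours, or 2 live neighbours and is currently alive.
    return {cell for cell, n in neighbours.items()
            if n == 3 or (n == 2 and cell in live)}

# The classic glider: every 4 steps, the same shape reappears shifted
# one cell diagonally -- even though no individual cell ever "moves".
cells = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
for _ in range(8):
    cells = step(cells)
print(sorted(cells))  # the glider's shape, shifted two cells diagonally
```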
The second facet is that the adaptability of the system depends on the level of self-organization present in the system. For numerous reasons related to information dynamics and the relationship between ideas and experimentation (which, again, go beyond the scope of this essay), a system that organizes itself without the need for external control is better able to adapt to changing conditions. Anyone familiar with Friedrich Hayek’s critique of central planning, and his emphasis on the dynamic nature of markets, will recognize how incompatible adaptability and external control really are.
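As a toy illustration of that adaptability claim (entirely my own construction; the population sizes, noise levels, and update rules below are arbitrary assumptions), compare a population locked to a fixed central plan with one whose members experiment locally and imitate whichever member is currently doing best, while the environment drifts underneath both:

```python
# A toy contrast between external control and self-organization.
# Both populations try to "match" a drifting environmental condition;
# one must hold to a fixed central plan, the other is free to
# experiment and imitate successful peers.
import random

random.seed(0)
N, STEPS = 50, 200
target = 0.0                                 # the condition to match
rigid = [0.0] * N                            # the central plan: hold at 0.0
adaptive = [random.uniform(-1, 1) for _ in range(N)]

rigid_err = adaptive_err = 0.0
for _ in range(STEPS):
    target += random.gauss(0, 0.1)           # the environment drifts
    # Self-organization: small local experiments, then peer imitation
    # of the current best performer -- no external controller involved.
    adaptive = [x + random.gauss(0, 0.05) for x in adaptive]
    best = min(adaptive, key=lambda x: abs(x - target))
    adaptive = [best if random.random() < 0.2 else x for x in adaptive]
    # The rigid population is forbidden from deviating, so it never updates.
    rigid_err += sum(abs(x - target) for x in rigid) / N
    adaptive_err += sum(abs(x - target) for x in adaptive) / N

print(f"fixed central plan, mean error: {rigid_err / STEPS:.3f}")
print(f"self-organizing,    mean error: {adaptive_err / STEPS:.3f}")
```

In runs of this toy, the self-organizing population tracks the drift closely while the planned population’s error grows with every shift in conditions. It’s a cartoon, but it captures why rigidity and adaptability pull apart.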
The third facet is non-linearity. What non-linearity means in this context is that any interaction between parts of a system, no matter how small, can have significant effects on the system as a whole. For example, it doesn’t immediately follow that, in a complex system, the largest “parts” — like, say, the largest concentrations of power in a system — will produce the largest changes in output whenever they act. Relatively “minor” agents in a system may end up effecting the greatest change through their interactions, simply because each agent magnifies the intensity of the change as it cascades through the system.
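A standard minimal illustration of non-linearity (again, my own addition) is the logistic map: a perturbation of one part in ten billion — the smallest of small differences between two “agents” — compounds until the two trajectories bear no resemblance to one another:

```python
# Non-linearity in miniature: the chaotic logistic map x -> r*x*(1-x).
# Two starting points differ by 1e-10; the gap roughly doubles each
# step until the trajectories are completely uncorrelated.
r = 4.0
x, y = 0.4, 0.4 + 1e-10

for step in range(51):
    x, y = r * x * (1 - x), r * y * (1 - y)
    if step % 10 == 0:
        print(f"step {step:2d}: divergence = {abs(x - y):.2e}")
```

Small inputs, system-sized outputs: the size of an initial difference tells you almost nothing about the size of its eventual effect.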
A Thinly Rational Self-Interested Agent Meets Complexity
Now enter our thinly rational self-interested agent. If they’re truly self-interested, and they’re exercising only a thin “instrumental” form of rationality, then — so the argument goes — they’ll only take into consideration information that helps them remain consistent in their intentions and goals, and discard the rest as unnecessary. Now, there are numerous — numerous — problems with this conception of rationality, as the likes of Jon Elster and Edward F. McClennen have pointed out; but we’ll assume, just for this argument, that there aren’t any issues with it.
If the output of interactions between parts were linear, if the adaptability of a system didn’t depend on its ability to self-organize, and if emergence weren’t a dominant facet of human systems, then it’s, if nothing else, not obviously counter-productive for you to disregard the interests of others. You’ll very likely miss out on the benefits of self-improvement, since you’ll be too busy bulldozing over other perspectives to give them any consideration. But, at first glance, it doesn’t appear as though you’re actively working against the accomplishment of your goals by doing so.
Being embedded within a CAS, however, changes everything; and any agent that ignores the nature of the system they’re in is acting against their own self-interest (and, thus, is acting irrationally — even under a thin, “instrumental” definition of it).
For one thing, the importance of self-organization means that domination, hierarchy, and initiatory force are all deeply counter-productive. The more you try to control the way people think or act or interact, whether directly through force or via more indirect measures (threats, fear, or “nudging” through some other form of incentive-manipulation), the less adaptable the system will be. This is problematic just on the level of basic survival, as rigid systems will be far more vulnerable to shocks and external threats (however you might define “threat”) than highly adaptable systems.
But beyond concerns over survivability, the nature of CAS means that any agent that seeks to create a highly rigid social system will be actively restricting their ability to accomplish their goals — even if they place themselves at the top of a dominance hierarchy. Adaptive systems are better at processing information, learning from past actions, and changing their behaviour to reflect what they’ve learned. Adaptability, experimentation, and learning go hand-in-hand, in other words; and an environment that encourages self-organization, rather than restricting it, will provide all agents within that system with more options, tools, and resources to accomplish their goals.
It’s in a rational agent’s own self-interest, in other words, to encourage self-organization, and push back against attempts to restrict self-organization. Just as Byas said in his last article, “The Authority of Yourself,”
[I]t is in my own interest that I refuse to deploy aggression or domination against you, and it is in your own interest that you refuse to deploy aggression or domination against me.
It’s rational, in other words, to care enough about other people to respect them as autonomous agents.
A similar thing happens when we consider emergence: a thinly rational self-interested agent will want to have as large a variety of emergent institutions to choose from as possible, because maximizing choice is the most effective general principle available to ensure that an agent is prepared for any possible future. Supporting emergent institutions that further the process of emergence naturally follows from this; but given that institutions only exist because of the interactions of a more fundamental subsystem (i.e., interactions between human beings, in our case), what also follows is that it’s in the interests of a thinly rational agent to encourage rich and diverse interactions amongst all possible agents. This requires not only a robust form of negative freedom (i.e., no external authority controls who you interact with or under what conditions), but also a robust form of positive freedom — i.e., you’ll want to ensure that as many individuals from as many different backgrounds as possible have the capacity to engage in interactions with other agents. If they lack the material or symbolic resources to participate in these interactions, then the number of possible emergent institutions decreases; and with that decrease, the number of choices open to individual agents also decreases.
Yet again, it’s in the self-interest of a thinly rational agent to care enough about others to respect them as autonomous agents, but with an even stronger condition included: it’s in your rational self-interest to care enough about individuals to help them participate in rich interactions with others. Much like Peter Kropotkin argued over a century ago, mutual aid can be readily defended on “selfish” grounds.
Finally, non-linearity strikes a deathblow against the notion that a society is best served by catering to the already rich and powerful — that is to say, it has deeply anti-hierarchical implications. If any individual can have a significant impact on the output of a CAS, then it’s in the best interests of any individual who wishes to profit from said output to encourage an egalitarian distribution of resources, or at least something that closely approximates one. Indeed, as the dynamics of wealth distribution shift, it would be in the rational self-interest of all agents involved to periodically redistribute wealth in order to encourage non-linear effects from all parts of the system. Much ink has been spilt arguing that inequalities in wealth can be justified on the basis of incentives — that is to say, that industrious behaviour can be induced amongst the poor through inequality, so it’s better for everyone in the long run if we make being poor a type of punishment. CAS, though, suggests the opposite: that we’re better off unconditionally spreading the wealth around, because even the smallest or seemingly most inconsequential actions of an individual can kick-start a transformation of the system as a whole.
And so on. I could go on at great length discussing all the different facets of CAS and how they impact self-interest. What’s important is that the above discussion proceeds directly from a thinly rational self-interested agent encountering a CAS. Under conditions of complexity, the line between egoism and morality — a morality that is against domination, hierarchy, inequality, and initiatory force — begins to blur.
It’s a morality that looks awfully similar to the kind espoused by anarchists.
Complex Rational Agents
The above discussion dealt only with a thinly rational self-interested agent encountering a CAS. But it was also noted that individuals themselves are CAS; our brains are complex adaptive systems, our nervous system is a complex adaptive system, and even our cells are complex adaptive systems.
Does this impact how a thinly rational self-interested agent ought to act? I think it does, though fully exploring why would require me to move past the conditions I’ve set for myself for this discussion.
The previous section’s discussion took, as a given, that individual agents have perfect knowledge of what their interests are and that these interests (or “preferences,” to use a more precise term) are static over time. But if an individual agent is expected to change based on their interactions with other agents or the external environment — be it through physical interactions or “merely” the exchange of information — then the assumption that preferences remain static over time seems fanciful. And note that it’s not a failure of rationality for someone’s preferences to be dynamic; if rigid social systems frequently fail because they’re unable to adapt or effectively problem-solve, the same principle holds for individual agents too.[2]
This is where it becomes difficult to maintain a “thin” definition of rationality, since the problems with that conception of rationality threaten to spiral completely out of control. That’s a discussion that’s well beyond the scope of this project.
But it also becomes difficult, in my view, to maintain the metaphysical assumption that underlies most of egoism: that a clear demarcation line exists between “me” and “you,” as stated at the beginning of this essay. The fact that human brains — let alone human beings in general — are CAS not only implies that our preferences are bound to change over time, but that what our preferences actually are likely won’t be known until the moment we have to put “skin in the game.” And even then, the most introspective individual might be presented with multiple reasonable interpretations of why they acted the way they did. Again, this isn’t a failure of rationality; it’s inherent in the nature of CAS.
If individuals themselves don’t necessarily know what their own interests are a priori, and that’s partially because your own interests will be shaped by others as you interact with them, we start inviting questions about whether the notion of a “self” is in any way coherent — if, in other words, it makes any sense to say that I’m a completely separate being from Wayne Gretzky.[3] If the boundaries of the self are inherently fuzzy, then the very concept of “egoism” becomes incoherent. We’re in a situation where acting against the interests of others is identical to acting against your own interests; whilst, conversely, acting for the benefit of others is identical to acting for your own benefit.
Personally, I have no problem with this line of thought — indeed, I endorse it wholeheartedly. But it’s a line of thought that deserves its own essay; it’ll likely be one far longer than this essay, too.
The fact that human society is a CAS, though, drastically changes the relationship between morality and egoism. Altruistic benevolence is inextricably tied to self-interest — an altruistic benevolence that lines up closely with the kinds of ethics that anarchists frequently articulate and defend. A thinly rational self-interested individual will be motivated to follow the prescriptions of this kind of ethics because it’s in their self-interest to do so.
Endnotes:
[1] This is the rationality of homo economicus, and it’s also the rationality that, according to some thinkers, turns marketplaces into “morality free zones” (to quote David Gauthier).
[2] Frank Miroslav discusses this in his contribution to the previous Mutual Exchange. In competitions to see which algorithm better accomplished a specific goal, evolutionary algorithms optimized for novelty outperformed algorithms optimized for achieving that goal directly.
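For a toy version of that result (my own illustration on a contrived one-dimensional “deceptive” landscape; the real experiments, in the spirit of novelty-search research, are far richer), compare a hill climber that optimizes fitness directly with a search that ignores fitness and rewards only visiting new places:

```python
# Objective-driven search vs. novelty-driven search on a deceptive
# landscape: a small local peak at x=3 hides the true peak at x=10.
import random

random.seed(1)

def fitness(x):
    """Deceptive landscape: local peak at x=3, global peak at x=10."""
    if x <= 3:
        return x
    if x <= 6:
        return 6 - x          # a valley the objective-seeker won't cross
    if x <= 10:
        return 2 * (x - 6)    # the climb to the true peak (fitness 8)
    return 8 - (x - 10)

def mutate(x):
    # Small random step, kept inside the domain [0, 12].
    return min(max(x + random.gauss(0, 0.3), 0.0), 12.0)

# Objective search: accept a step only if fitness improves.
x, best_obj = 0.0, 0.0
for _ in range(1000):
    cand = mutate(x)
    if fitness(cand) > fitness(x):
        x = cand
    best_obj = max(best_obj, fitness(x))

# Novelty search: ignore fitness; always move toward the candidate
# farthest from everywhere already visited.
x, archive, best_nov = 0.0, [0.0], 0.0
for _ in range(1000):
    cands = [mutate(x) for _ in range(5)]
    x = max(cands, key=lambda c: min(abs(c - a) for a in archive))
    archive.append(x)
    best_nov = max(best_nov, fitness(x))

print(f"objective search, best fitness found: {best_obj:.2f}")  # stuck near 3
print(f"novelty search,   best fitness found: {best_nov:.2f}")  # near 8
```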
[3] Note that as a Calgary Flames fan, I really don’t have any ulterior motives for wanting to be some hybrid entity with Wayne Gretzky. Jarome Iginla? Yes. But not Gretzky.