Hey all -
* Abstract thinking
If there’s one thing I repeat over and over again in this newsletter, it’s roughly: “everything is pattern-matching.”
We’re wired to take a bunch of information (e.g. sensory data, thoughts, emotions), identify patterns and make predictions about the future. The better we can predict the future, the better we can plan for it and ultimately - from an evolutionary perspective - improve our chances of survival and reproduction.
When we look at the physical world, however, we don’t see patterns. Instead, we see individual instances of things, each of which is unique and exceptional in its own way.
For example, if you see a cat, that is a specific cat. It shares many features with other cats, such as its appearance, its gait, its behavior and so on. It also has many features which distinguish it from other cats, such as its name, its upbringing, its personality - all the way down to how many whiskers it has.
We wouldn’t say a cat is no longer a cat if it had a name like “Sam,” but we’d probably say it’s no longer a cat if it were 10 feet tall with fur made of stone. In other words, we’ve defined some features of cats as “essential” and others as “non-essential.” The set of essential features that all cats share in common - now that is the cat.[1]
The “essence” of the cat - that is, what makes a cat a cat - doesn’t exist in the physical world. It only exists in our imagination.
A cat therefore is the abstraction, concept or overarching pattern that ties all cat-looking and cat-behaving entities together. This abstraction is composed of constituent examples - your cat, someone else’s cat, a feral cat - and the set of all features these examples have in common is the “cat.”
When we think abstractly, then, we are really just pattern-matching. Abstract thinking is our ability to identify patterns and similar features between otherwise separate and unique entities.
When we deal with the intangible - like abstractions or concepts - we’re suddenly released from the constraints of the real world. Without ever touching a real-life cat, we can imagine 1,000 identical copies of the same cat, or a cat with two heads, or a cat without a tail.
In other words, abstractions offer us a simulated environment to run thought experiments without ever touching the real world. Here, we are free to bend the rules of reality - to wonder “what if?” The only raw material is our ability to imagine.
The more nested and high-level an abstraction becomes - with constituents and sub-constituents and sub-sub-constituents - the further we get from the real world. These highly abstracted ideas give us a lot of leverage - look how much we’re able to describe! - but they come at the cost of precision and accuracy in explaining the behavior of any underlying constituent. Democracy, for example, is an extremely high-leverage abstraction, but it has hardly any utility in predicting how you and I will behave toward each other.
Highly abstracted ideas also become harder to visualize and understand. By their very nature, they’re less tangible, less real. An “animal” for example is a very abstract concept: what do you visualize if I say “animal?” Maybe a specific animal - like a dog - or maybe nothing at all. Alone, there is no clear representation of an “animal.” When ideas become very abstract - such as financial models or sociological models - they become increasingly difficult to reason about or validate.
While abstract thinking can give us a lot of power - perhaps civilization as we know it owes everything to our ability to think abstractly - we always need to pause, reflect and understand how these abstractions tie back to the real world.
* Leaky abstractions
Abstractions are helpful because they allow us to summarize, explain and predict a lot of underlying behavior without ever delving into the complex details. However, abstractions don’t work 100% of the time and the ugly underlying details sometimes “leak through.” This is what Joel Spolsky called a leaky abstraction.
For example, we may have the concept of a “representative democracy,” where all citizens have the right to vote for elected officials who represent the interests of those citizens.
But sometimes this concept doesn’t work as well in practice as it does in theory. Not everyone can or does vote; elected officials may not represent the interests of their constituents; certain historical or political constraints (such as gerrymandering) may limit the degree to which voters can elect officials of their choosing. In other words, the abstraction has “leaked” and we have to dive into the details to see what’s actually going on.
If the abstraction virtually never works as it’s supposed to, we’d say it’s a very leaky abstraction. Edge cases and corner cases and exceptions abound. If the abstraction always holds true - gravity, for example, works just as it says on the tin - then we’d say it’s not a leaky abstraction at all.
We’re always looking for higher-leverage, higher-fidelity abstractions - those which explain a lot of underlying behavior without increasing the number of exceptions - but leakiness is inescapable. As Joel wrote: “All non-trivial abstractions, to some degree, are leaky.”
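Since the term comes from software, here’s a minimal sketch of the same idea in code - Python is my own choice of illustration, not an example from Joel’s post. A Python list presents itself as “a sequence you can insert into anywhere,” but it’s backed by a contiguous array, and that hidden detail leaks through the moment you insert at the front:

```python
import timeit

# The list abstraction says "insert anywhere you like." Underneath, it's a
# contiguous array, so inserting at index 0 shifts every existing element.
append_time = timeit.timeit(
    "xs.append(0)", setup="xs = list(range(100_000))", number=10_000
)
prepend_time = timeit.timeit(
    "xs.insert(0, 0)", setup="xs = list(range(100_000))", number=10_000
)

print(f"append to end:   {append_time:.4f}s")   # cheap: amortized constant time
print(f"insert at front: {prepend_time:.4f}s")  # far slower: linear time per insert
```

The abstraction never lies about what happens, but it quietly stops holding about how much it costs - and the details underneath force their way back into view.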
* The high-ground maneuver
I noticed several years ago a little “hack” in argumentation and persuasion that I never could quite describe or label. It was a way to persuade people without actually being right. It was (and is) entirely a logical fallacy, but it worked. So I was pleased when Aaron referred me to a blog post which finally articulated the tactic. Scott Adams, author of the Dilbert comic and now somewhat controversial blogger, calls it “the high ground maneuver.” He writes:
The move involves taking an argument up to a level where you can say something that is absolutely true while changing the context at the same time. Once the move has been executed, the other participants will fear appearing small-minded if they drag the argument back to the detail level. It’s an instant game changer.
In other words, we shift the argument to a higher plane of abstraction - an abstraction that includes your specific examples as well as mine, which are presumably the examples we are disagreeing over. Once I convince you of the general truth at higher ground, it makes both your and my examples seem trifling in comparison. Our disagreement, especially if you really had a problem with my behavior, is not so much a disagreement as a footnote. We move on.
For example, let’s say you tell me: “You should turn the lights off before you leave the house. We don’t want to waste electricity.” There may be two implicit supporting arguments here: (1) it saves money, and (2) it is good for the environment.
Let’s take the first one just to demonstrate how the high-ground maneuver works. If your argument is that turning off the lights saves money, I now have to convince you of the “bigger picture.” I have to offer an even more abstracted concept that encompasses your example.
I may argue that “we really don’t have to worry about the money, and arguing over little things like turning off the lights will actually make us worse off.” I may also argue that “worrying about little things like turning off the lights means we can’t focus on bigger things like our jobs and families and so on.”
You may or may not be convinced by these - after all, they’re logical fallacies. But you’ll notice that none of them refuted the original argument that turning off the lights saves money. They simply trivialized it.
In other words, the high-ground maneuver reframes the argument and reprioritizes what is important. If I can persuade you of a larger, more general truth that contains your example, then your argument seems refuted without ever being addressed. It’s hard to catch the fact that I simply switched the context, ensuring that we’d be arguing on a different plane of abstraction. Arguing about different things is a logical fallacy - an “illegal move” in debate - but it can be hard to spot.
Winning an argument without actually being right has a lot of allure, but like all tactics in the field of persuasion, this one needs a lot of ethical consideration. Even over the past few years, I’ve used it unintentionally (given my tendency to abstract up) and, fortunately, Aaron has been particularly good at catching it.
The high-ground maneuver reminds us why it’s so important to define our terms when we engage in any debate. We need to make sure we’re talking about the same thing on the same level of abstraction. If not, someone can take the argument to higher ground, such that there’s no resolution at all on what we were originally talking about. It’s a way to neuter productive discussion, especially for someone trying to escape responsibility. And the better we can identify the tactic, the sooner we can return to the original discussion.
Thanks for reading,
Alex
[1] Plato would call this a Form. Where Plato erred was in believing that Forms are objective, immutable and eternal, as if baked into the universe itself. Instead, they’re simply patterns identified by people (and conscious beings more generally), and as those patterns evolve, so do the Forms.