Hey all -
Merry Christmas! :) As a gift, here’s a nice long email.
* The Feynman technique
This is not a novel concept, and it’s just a bit of fortuitous branding that Richard Feynman’s name got attached to it, but the Feynman technique is a framework for learning new things. It’s very simple:
Pick a concept you want to learn, such as “what is company stock?” Then ask simple questions about it from every angle: what is it? Why does it exist? How does it work? What happens without it?
All of these questions will invariably branch out into even more jargon and concepts, such as “equity,” “preferred stock,” “voting rights,” “limited liability” and so on.
Three things about this framework immediately jump out at me:
(1) Learning something deeply is necessarily a lot of work. It’s not like you’re just answering “what is X.” You’re asking ten questions about it, each trying to get at it from a different angle, and then asking ten questions about each of those underlying concepts. The bottleneck to knowledge, then, is not so much ability as effort.
(2) Learning something deeply is largely a function of the questions you ask. The more of the concept’s surface area your questions cover, the better you will understand it. In this framework, asking good questions is a necessity.
(3) Questions are also where new insights come from. Once you understand all the factors which compose “company stock,” you’re able to toggle individual factors on and off, and ask questions such as: “can you have stock with unlimited liability?” or “what would stock in people look like?”
One more from Feynman. It’s short so I highly recommend it.
Everyone goes through bouts of anxiety about their own self-worth: am I actually any good? So did Feynman. Then he had this epiphany.
* Google search tricks
Whenever I’m learning something new, I have two Google tricks I pretty much always use. They’re not foolproof, but they do offer more signal than noise:
(1) “Things I wish I knew about ___”
That one’s gotten more clickbait-y in recent years (what hasn’t?) but it’s still a good launch point. The second:
(2) “___ vs. ___”
For example, if I’m trying to learn about data modeling, I’ll check out what Google autosuggests for “data modeling vs. ___.” Often I get a sense of competing paradigms or technologies, which provides a good counterbalance for why I’m learning the specific thing I chose (and not the other thing).
Does anyone have other useful tricks for Google here? Twitter maybe?
In the last newsletter, I wrote that the “fallacy of parsimony” was taking the first answer that comes to mind when trying to explain something. This is wrong, Jerry quickly noted.
To clarify, the “fallacy of parsimony” is not taking the first or most immediate explanation that comes to mind. Rather, that is “availability bias.”
The “fallacy of parsimony” is prematurely settling on the simplest explanation when more complex ones deserve consideration. It contrasts with Occam’s Razor - the principle that you should take the simplest explanation. John Haidt’s point is that you can, but you should think about it first, because real life is sometimes complex.
Here’s a great quote by the Buddha:
Holding on to anger is like grasping a hot coal with the intent of throwing it at someone else — you are the one who gets burned.
Though it wasn’t actually by the Buddha and it wasn’t entirely about coal:
By doing this you are like a man who wants to hit another and picks up a burning ember or excrement in his hand and so first burns himself or makes himself stink. - Visuddhimagga IX, 23.
Not as elegant, but still insightful.
Thanks for reading,
“Once, I said to him, ‘Dick, explain to me, so that I can understand it, why spin one-half particles obey Fermi-Dirac statistics.’ Sizing up his audience perfectly, Feynman said, ‘I’ll prepare a freshman lecture on it.’ But he came back a few days later to say, ‘I couldn’t do it. I couldn’t reduce it to the freshman level. That means we don’t really understand it.’”
I also love how nothing matters more to him than finishing that damn book. I think one of Feynman’s greatest assets was his insatiable - even maniacal - curiosity.