It’s important not to know some things. This is counterintuitive because we’re used to looking things up. We worship knowledge while dismissing the process that produces it. We don’t ask what we expect to see, and this is a mistake. Unless we start with ignorance, it’s impossible to use the scientific method. Running an experiment requires a degree of emptiness in your mind. You should have an open mind, and it should stay as empty as you can possibly manage.
Information can be wrong or misleading
You wouldn’t expect a non-expert to have anything to teach an expert. Yet Socrates, who claimed to “know nothing”, questioned experts until contradictions sprang up. You wouldn’t expect someone who “knows nothing” to discover something interesting or important outside his area of expertise, but his method did exactly that. It also undermined the value of knowledge itself, even commonly held knowledge.
In The Double Helix, James Watson (the co-discoverer of the double helix) has a quote (@alexgusey tweeted this at some point) where he says something along the lines of: “data can be worse than wrong; it can be misleading”. And then he says something like: “a lot of people think you need to fit all the data. You don’t. You want to fit against the right data.” He’s warning against overfitting. Consistency is important if you can trust the information you have. Otherwise, you have to discard some data, knowing that sometimes it’ll come back with a vengeance and tear down your beautiful edifice.
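Watson’s warning maps directly onto statistical overfitting. A minimal sketch with hypothetical data (the data, degrees, and test point are all my invention, not Watson’s): a flexible model that “fits all the data” passes through every noisy point, while a simple model fits only the underlying trend. The difference shows up when you ask both to predict something new.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: a true linear trend (y = 2x) plus noise.
x = np.linspace(0, 1, 10)
y = 2 * x + rng.normal(scale=0.2, size=x.size)

# A degree-9 polynomial on 10 points "fits all the data":
# it passes through every noisy observation.
overfit = np.polynomial.Polynomial.fit(x, y, deg=9)

# A degree-1 polynomial fits the right data -- the trend --
# and treats the rest as noise to be discarded.
simple = np.polynomial.Polynomial.fit(x, y, deg=1)

# Judge both on a point neither model has seen.
x_new = 1.5
true_new = 2 * x_new

# The flexible model tends to extrapolate the noise wildly;
# the simple model tends to extrapolate the trend.
print("overfit error:", abs(overfit(x_new) - true_new))
print("simple error: ", abs(simple(x_new) - true_new))
```

The overfit model has near-zero error on the points it saw and a much larger error on the new point: it memorized the noise instead of learning the trend.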
Consistency is overrated
This approach extends to other pursuits as well. Some of the hardest programming bugs I’ve had to fix are ones where I leaned on wrong information for too long. You can be a logical genius and still be wrong if your foundation is wrong 1.
Devon Zuegel has a good way of putting this: “If you enforce consistency above all else, you’re likely to be totally wrong about everything”. Once you poison your foundation, everything built on top of it starts to wilt. Or in religious terms, build your house upon the rock.
Another variation on Zuegel’s statement is that you don’t want to depend on the order in which you discover information. When you put too much importance on consistency, you naturally prioritize information you learned first. So you might reject new information because it contradicts what you already “know”. Many people do this because it’s difficult to think of yourself as a rational person if you hold seemingly irreconcilable beliefs.
The scientific method relies on ignorance
Rational people like to boast that they will update their views when new information comes in. This is great up to a point. If a scientific theory says X is always true, you need to find only one example where it isn’t.
Nutrition is one area where we desperately want answers and simultaneously lack consensus. Following nutrition science is then like following Edison’s individual light bulb experiments, but as a customer! Imagine you go to the store and see a bulb that promises light (according to the latest research). “Latest research” isn’t exactly a ringing endorsement here.
Historians can be scientific
Imagine two people: one person is a historical theorist and the other is a reader of history. Everything that the reader knows has already been penned down. The theorist is a mystery. What does it mean to be a historical theorist?
If you predicted that double monasteries should have existed based on your understanding of human psychology and society, then you understand their function better than someone who’s discovered this same fact by reading about it. Reading or discovering texts is the historical equivalent of running a scientific experiment. The universe already knows the truth. The only difference is that now a person gets to observe and record what happened.
Physics makes a distinction between theorists and experimenters. In history and philosophy I expect a similar distinction to be important. In physics, we imagine the theorist in an armchair and the experimenter in a lab coat. In history, to the outside observer, both are thumbing through the musty pages of old books. One is predicting and building a mental model while the other is remembering and searching. You can do both. There’s a different mode of thinking in each case. In one, a picture emerges in his head that covers more than what he’s read. In the other mode he’s testing a mental picture against information he hasn’t seen before.
Aside: computers might make it possible for a new kind of historical theorist to exist 2.
The experimenter takes a theory and invents an experiment to determine if the theory is true. The theorist invents the theory. What matters to the experimenter is taking hypotheses and arranging atoms in the universe in such a way that when you prod in just the right way, the universe coughs up the truth. Whether those atoms make up a historical text or the Large Hadron Collider shouldn’t matter. Either way, you’re like a kid shaking one of your gifts under the Christmas tree. You don’t know what’s inside—and you know what you want to be in there—but true joy only comes when you spend less time thinking about what you want and more about what you think your parents (or Santa, or God) might have actually gotten you. Otherwise, you risk spending too much time working up an expectation only to get disappointed.
In sum
Without ignorance, it’s impossible to make predictions or use the scientific method. You have to start with not knowing… and then bridge the gap. Once you’ve read some information, in a sense you’ve already poisoned your mind. You also lose out on the joy of discovering a fact in your imagination first.
Knowledge should be stable over time, but it’s a lagging indicator. We should also be skeptical of what we think we know. Our understanding of the world should strive to be consistent, but not at the expense of everything else. If you try to fit all the data available, you risk overfitting. Some information may be wrong or misleading. However, if you ignore information that doesn’t fit, maybe you’ve fallen in love with your own creation.
Knowing too little comes with its own risks. It’s easy to generate plausible hypotheses when you’re unconstrained. Which idea do you pursue when you can think of a new one that’s just as interesting next week? This is something I haven’t explored in this piece. Maybe it’s worth filtering based on hypotheses you can test. What if you instead filter on simple hypotheses that open up the largest search space? This can provide fertile ground for exciting intellectual exploration. At best, you can pick the fruit of this new orchard for years to come. At worst, you lose yourself exploring a new land that bears little fruit.
If you’re a rational person, you’ll realize that if your foundation is wrong, you’ll start to run into inconsistencies (and probably quite quickly). Something to follow up on: how far can you possibly go on a bad foundation? Is it possible to land someone on the moon with a system poisoned by deeply rooted lies? I imagine this would be very difficult.
Computers make a new kind of academic possible: a monastic historical theorist. The idea is that you hole yourself up in the mountains with only a laptop whose activity is publicly recorded. Every document or web page you bring up is recorded. This makes it possible to verify whether you’ve read a fact or derived it. This distinction is important. Suppose the theorist makes predictions about a future that is not yet known. How much weight can we give to his predictions? It’s impossible to know unless you have some sort of track record. If he’s able to derive previously discovered knowledge, then you can have more confidence in his predictions. Further, if you know he’s not going to profit personally, then it gives you further confidence that his predictions aren’t meant to manipulate.