Year after year, students in my class on the Russian novel have told me that my lectures directly contradict what they learn in other courses, especially in the social sciences. My lectures explicate Dostoevsky’s ideas about free choice and responsibility and examine Tolstoy’s belief that events are fundamentally contingent and uncertain. Elsewhere, these writers’ ideas are dismissed as pre-scientific superstitions. At last, a few students organized a debate between me and a popular psychology professor.
That professor began his Introduction to Psychology with the statement: “Science has shown there is no such thing as the soul.” He compared belief in free will, or anything else that contradicted an iron determinism, to faith in “little green men on Mars.” The concept of freedom was simply a relic of the religious past. Most students accepted this view, although they worried that it seemed to leave no room for human responsibility, for good and evil, for striving, and most importantly, for meaning. It also gave them no reason to think of human beings as infinitely valuable or to regard them as different from any other piece of complex, organized matter. One pre-medical student paraphrased the view that his courses implied: “A patient is just a piece of meat with medical insurance.”
What makes such crude materialism seem not just sophisticated but also persuasive? The idea is disarmingly simple: everything has a cause and all causes are governed by the laws of nature. Whatever happens has been completely determined by those natural laws, and so nothing else could have happened. These students and the psychology professor would have accepted Laplace’s contention that a calculating demon, who knew all the laws of nature and all the facts obtaining at any given moment, could predict every future event or retrodict every past event. It would just be a matter of plugging in the numbers. Time is nothing more than the letter “t” in a series of equations.
To be sure, they concede, we don’t yet know all the laws and facts, but in principle, everything is predictable. I am suspicious of things said to be true “in principle”: the phrase often signals that a superstition is being offered as a proven certainty. In the determinists’ view, the future is fixed. The past could have been nothing but what it was. Whatever agonies of choice Caesar may have suffered, he could not have not crossed the Rubicon; Thomas More could not but have chosen to defy the king. Freedom, contingency, and chance are apparently nothing more than names for ignorance. To the extent we do not know the causes of a person’s actions, we call these actions free; but as knowledge expands, the realm of freedom shrinks in proportion. Laplace contributed greatly to probability theory, but he nevertheless insisted that there are no genuine probabilities because everything is determined: in calculating chances, we are really measuring not the uncertainty of events but the degree of our ignorance.
Determinists add: That much is surely true of inanimate objects, but it must be just as true of people because there is no exemption from the laws of nature.
But don’t we, unlike neutrons, think and decide? The usual answer is that of course we make decisions, but we do not make them out of thin air. We act on the basis of thoughts and we choose with our will, but something causes those thoughts and directs the will. Those causes, if accurately and exhaustively known, would let the knower accurately predict our actions. One character in Dostoevsky imagines constructing a table of logarithms up to 108,000 containing all future events. Another speaks of the nerves in our brains having little tails that determine what we attribute to free will. The second character is paraphrasing the medical physiologist Claude Bernard, but if one were to update his neurophysiological language, the identical point could be made today.
If one prefers a different vocabulary, one might say that our thoughts and acts of will are caused by our upbringing, or by sociological forces, or by the pressures of our culture. Some even attribute our actions to our genes and offer sociobiological explanations of our behavior. These explanations sound very different, but all presume—“in principle”—that what we do is wholly predetermined by outside causes.
In short, we can will what we want but we cannot will what we will. Philosophers refer to this argument by a number of names, such as compatibilism or reconciliationism. They patiently explain that free will and determinism do not conflict. People do choose as they wish; it’s just that those choices are themselves determined. We “freely” will what we are certain to do. The key move is that freedom is redefined: it no longer means doing something that has not been wholly predetermined, but doing something that is not compelled by other people. We are free when we do not have a gun at our head; if that is all freedom means, it is certainly compatible with determinism.
In The Brothers Karamazov, Ivan is torn to pieces by two contradictory beliefs. He fully understands the implications of the deterministic view of the universe as a closed chain of causality with only one thing possible at any given moment. If that is so, then right and wrong really do not exist; we call things right and wrong only because the chain of causality leads us to do so. Our neurons, our social conditioning, or our genes make us apply moral terms, but nothing is right or wrong in the nature of things. Our moral reactions resemble our disgust at noxious substances: we feel them, but there is no real question of evil or sin. When I drop a pencil and it falls to the ground, it would be absurd to ask whether gravity is moral or immoral. The laws that govern our decisions about what to do are more complex than those governing the fall of a pencil, but, like all natural and social laws, they, too, are neither moral nor immoral. They just do what they do. It’s a bleak picture, perhaps, but we have to face it. This view appeals in part because it endows those who believe it with an unflinching heroism in recognizing the unadorned truth about ourselves that less courageous people refuse to accept.
And yet, we can’t escape the conviction—more powerful than any philosophical argument—that some actions just are evil. Ivan Karamazov offers us a bone-chilling catalogue of child abuse cases, which Dostoevsky culled from the Russian press. We can add horrors Dostoevsky could not: Auschwitz; the Soviet Gulag, which took some thirty million lives; the Cambodian killing fields, where a quarter of the Cambodian people were exterminated; and many more. We feel moral indignation at these events, but, if one accepts the deterministic view I have outlined, such indignation is senseless. For to feel indignation or assign blame is to wish that events had been different. But if determinism is true, they could no more have been different than that pencil could have refused to fall.
Unable to reconcile his profound sense of evil with his belief that nothing could be otherwise, Ivan goes mad. “What does it matter to me that effect follows cause?” he tells his brother Alyosha. “I must have justice or I will destroy myself.” But there is no justice, and no other way things could be, because natural laws operate automatically. Time is closed: at any moment, only one thing can happen. To wish that child had not been tortured—why, you might as well wish the English had won the Hundred Years’ War.
If one accepts this picture, life resembles a novel we are in the midst of reading. We do not yet know what will happen to the heroine, but it is certain, already written, there on the last page. No hopes or fears can make any difference. When I was a child, I did actually root for the English while reading a long encyclopedia article on the Hundred Years’ War. I can’t tell you what pain it caused me that Joan of Arc was French. And yet, as I well knew, it was absurd to root for an outcome that had already happened. If time is closed, hopes or fears for the future are just as senseless because the outcome, though it hasn’t happened yet, is as certain as if it had. The events on page 900 are already written, even if we are still only on page 400.
The deterministic view that I have outlined can be neither proven nor disproven, but there are, I think, good reasons to question it. My colleague in psychology claimed the authority of science: “Science has shown that there is no such thing as the soul.” May I ask for the experiment that has shown this? Is it not clear that not only has there been no such experiment but that there could be none? After all, a scientific experiment is one in which the hypothesis might turn out to be false; it carries weight only because it survives a test it could have failed. So it must be possible to imagine a scientific experiment that might discover a soul but fails to do so. But how can one run a test for a non-material object?
This objection is designed not to prove the existence of the soul, but to show that my colleague has claimed the authority of science for what is in fact a metaphysical prejudice. His materialism, like that of many people, is really an act of faith, no less than belief in angels. A moment’s reflection ought to tell us that there can be no experiment testing whether all events are wholly caused. One can test this outcome or that, but not all outcomes.
People tend to believe that all events must be caused by discoverable laws because some things clearly are caused by such laws. But nobody doubts that some events are law-governed.
Since both sides agree that laws account for many events, no examples of events that are plainly governed by laws prove determinism. An old Yiddish proverb goes: “For example is no proof.” You have to do more than provide examples: you have to rule out the possibility of counterexamples, and in the case of determinism, no experiment can do that.
The issue concerns the nature of time. How many outcomes are possible at any given moment? For a determinist, one and only one outcome is possible: in that sense, time is closed. But for an indeterminist, more than one outcome is possible at some moments. That is, there are more real possibilities than actualities. Time is open. The same set of circumstances might lead to a different result. I like to think of events as shadowed by the other possibilities that might have been—I call these “sideshadows.” To understand an event is to grasp not only what did happen but also what else might have happened. Similarly, to understand an action involves understanding what else the actor might have done, and to understand a self is to grasp what else that self might have become. There but for the grace of God go I. We are all capable of living more lives than one, and each of us is composed in part not only by our have-dones but also by our might-have-dones.
Two broad traditions—let me call them the tradition and the counter-tradition—have competed in Western thought. The dominant tradition denies the might-have-been, and the counter-tradition accepts it. Today, determinists often speak as if the belief in open time were a relic of the prescientific past, but most Western theologians have insisted that time is closed, that one and only one event can happen at any given moment. Otherwise, it has been said, God would not be omniscient.
With a slight change of vocabulary, modern secular views and older theological ones affirming closed time are virtually identical. Both then and now, the believers in closed time have faced counter-traditional opponents who have accepted open time. The line separating the traditions does not run between the religious and the non-religious, because believers in both closed and open time can be found on either side. Modern determinism derives directly from the dominant theological tradition of closed time and uses astonishingly similar arguments.
From Augustine to Luther and Calvin, most Christian theologians have insisted precisely that one and only one thing can happen at any given moment. To believe otherwise would be to deny God’s omniscient foreknowledge and commit a heresy. But since Christian theology holds that we are free, and so are responsible for our actions, theologians have defined freedom in such a way that it does not conflict with foreknowledge. We are free to do what we like, but God, knowing everything about us, can foresee what we will freely choose. As Milton’s God explains in Paradise Lost, He made men and angels “Sufficient to have stood, though free to fall.” If they had not been free, their allegiance to God would have been of no value. The fact that God can foresee to the last detail what men will do does not compromise their freedom in the least: “If I foreknew,/ Foreknowledge had no influence on their fault,/ Which had no less proved certain unforeknown.” I think you can see that this argument is essentially the same as the determinist “reconciliation” of freedom and determinism. In both cases, freedom means mere lack of external compulsion. It does not mean that more than one genuine possibility for any event can exist. There is always only one possible outcome, and that outcome is knowable in advance—by God for Milton and “in principle” for determinists.
The link between the older and current formulations of closed time is to be found most obviously in the seventeenth century, concerned as it was to reconcile religious belief with scientific progress. Leibniz famously contended that God chose those laws of nature bound to produce the best possible world. Those laws provide a unique outcome for every moment, inasmuch as anything else would not only compromise divine omniscience but would also lead to a world less than optimal. Just as today people defend determinism by saying that everything must have a cause, Leibniz, who was more precise, said everything must have a “sufficient reason”: that is, there must be a reason fully explaining why this and only this event is possible. That is a very strong and specific sense of cause.
Later thinkers were to take Newton’s laws as a model for what all social sciences should achieve, but Newton himself made more modest claims. In particular, he was aware that his laws of motion did not fully explain the stability of the solar system, and at one point he even proposed that God occasionally intervened to give things a push and set the motion right. This suggestion scandalized Leibniz, much as it would the psychology professor. For Leibniz and for a modern determinist, there simply cannot be any exceptions to the operation of natural laws. Nothing—whether chance, freedom, contingency, or divine will—enters the chain of causality unpredictably from the outside. Was God such an inferior watchmaker, Leibniz asked, that he could not make the universe so that it would run on its own, but had to make one that needed repair in the form of external interventions? God exists outside of time; He does not enter into history and change the course set by the natural laws He made.
You will not be surprised to learn that Leibniz denied the existence of miracles in the usual sense. Of course, he could not say he denied miracles, for that would involve contradicting the Bible and religious dogma, so he did for the concept “miracle” what Augustine did for “freedom”: he redefined it. It’s a wonderful method, and I recommend it heartily. If you want to deny something but claim to be for it, just redefine it, as the Soviets were to do with terms like “democracy” and “freedom of speech.” And so you could be arrested for slander if you stated that the USSR did not permit freedom of speech. You were guaranteed a secret ballot; all you had to do was go up and ask for one.
Leibniz redefined miracles as events that do indeed follow the laws of nature but occur rarely—like snow in the Sahara—and so to the ignorant they seem to contradict natural laws or to be an intervention from outside the causal chain. Of course, Leibniz also regarded freedom as Augustine did, and maintained, for instance, that the “concept” of Caesar, including the tiniest event of his life, was given from all eternity.
The whole tradition called “natural theology” presumed that since God created the laws governing the world, one could study God’s mind by studying science. So for a long time, religion, far from fearing science, embraced it as pious. But if laws work on their own, who needs God at all? Think about it: take Leibniz’s model, and simply subtract God—just stop talking about Him, and talk about His laws—and you have precisely the determinist view claimed for science in the nineteenth and twentieth centuries. Indeed, Leibniz’s contemporary Spinoza, who seems astonishingly modern, repeatedly refers to “God or Nature,” as if they were identical, so one might as well just speak of nature alone.
The heritage of Leibnizian thinking and the idea of a perfect world made by a perfect God left their mark on the social sciences in ways that are often overlooked. Newton’s laws proved inspiring because there are so few of them: a vast complexity of astronomical phenomena, which had bedeviled thinkers since antiquity, proved explicable by a handful of laws. No wonder Newton is so often regarded as the greatest scientist who ever lived. From his time on, would-be social theorists tacitly assumed that behind the vast diversity of the world, underlying the dizzying disarray of history, and supporting the immense complexity of the mind, a few simple laws must govern, just as they do in astronomy—a view often called “moral Newtonianism.”
You can see that this conclusion would be tempting, almost automatic, and yet it does not follow. Tolstoy was immensely fond of pointing out the logical fallacies here. Why must laws necessarily be few in number? In War and Peace, Tolstoy asks: what if, when we looked behind the mass of contingent historical events, things did not simplify but complexify? Perhaps behind each contingency would lie many more. Causes might ramify, and perhaps the simplest paraphrase of the universe is not some simple equation but the universe itself. In that case, prediction would be impossible even in principle.
In short, prediction—even in principle—is out of the question either if laws merely shape events while still allowing for some free play or if the laws are so complex that knowing them is no easier than knowing all the facts in the universe. Both possibilities leave us dealing with contingency.
The world seemed so elegant to seventeenth-century thinkers and their followers. Nature was so artfully arranged, so amenable to mathematical redescription, and so efficient in carrying out its tasks, that an ethic of optimality seemed a sine qua non of any good theory. An aesthetic sensibility governed the taste for symmetry, simplicity, and optimality. That sensibility has persisted. Consider, for instance, the way contemporary economics is taught. That discipline’s claim to be the most scientific of the social sciences, its great prestige in universities, and its proven ability to get other disciplines to borrow its models, derives from its ability to start with a few basic axioms and then derive an immense variety of phenomena from them, all expressible in mathematical equations. Here the test of a science becomes resemblance not only to Newton’s laws but also to that other great model, Euclidian geometry. I have long thought that when high schools teach Euclidian geometry they typically omit the most important thing about it, that it provides one model for what true knowledge should look like: completely sure, without ambiguity, self-enclosed, symmetrical, and absolutely economical.
Optimality became a key concept for economists. In a free market, competition relentlessly drives things to an optimal point. Whatever is less than optimal will be driven out by something better, a logic that is often called Darwinian in another apparent attempt to claim the authority of science. Optimality insures that there is a single answer to all questions, because there is one and only one optimal point. Leibniz employed the very same logic when he claimed that even problems of science could be adjudicated by assuming that the best state of affairs must obtain in this best of all possible worlds. We have seen that he regarded Newton’s laws as incomplete because they required something outside to explain the perfect movement of the solar system. Leibniz knew in advance that the best, which meant the most stable, most economical, most self-contained, most perfect condition, simply had to obtain.
A century later, Laplace claimed to have perfected Newton’s model and to have fulfilled Leibniz’s program by showing the stability of the solar system. Thus the famous anecdote in which Napoleon, to whom Laplace explained his theory, asked about the role of God, and Laplace replied, “I had no need of that hypothesis.” Please note that this is not necessarily an atheistic remark, and it is one both Leibniz and Spinoza would applaud on theological grounds. It endorses self-contained perfection, whether that perfection was originally set up by God or not.
So convinced are economists of optimality that when, about a decade or so ago, it was proposed that sometimes suboptimal results obtained, it created a scandal. The example given was our QWERTY keyboard, which is not the most efficient, but which took hold for contingent reasons and then could not be dislodged because no one wanted to learn a new system—a phenomenon called “lock-in.” The most significant aspect of this controversy is that the very possibility of suboptimality was assumed by both sides to be scandalous. What if QWERTY were not an isolated case?
If, as the Euclidian model suggests and economists presume, everything could in principle be derived from axioms or laws, then narrative as a form of explanation would be unnecessary. That view of economics is, in fact, the reason that over the past half century doctoral training in economics, which once included a good deal of economic history, has almost entirely eliminated it. Narrative: that is what one uses when one cannot write an equation in which time is simply represented as the parameter “t.” I have known social scientists who have argued quite explicitly that to the extent a discipline requires narrative, it has not yet reached scientific status.
They mean something like this: one could, of course, describe the orbit of Mars as a story—in January it was here, then in May it reached there, and in December it at last skidded in over there—but such a story would be ridiculous because mathematical formulae precisely specify Mars’s position at any time t without a narrative. Ideally, all knowledge will be expressed in laws and not stories. Only if more than one possibility existed for some events would a narrative be necessary to describe the alternatives, the turning points, and the consequences of the roads that were but did not have to be taken. So we may reformulate the question of time as follows: is narrative essential for describing some aspects of the world, or is it, as in astronomy, ultimately dispensable?
Closed time, optimality, symmetry, perfection, and the belief that behind the apparent complexity of the world simplicity reigns: these assumptions define one intellectual tradition. In it, there is no room for contingency in the sense Aristotle defined the term: an event that can either be or not be. By contrast, the counter-tradition has insisted that contingency is essential to the world. In this view, prior causation, though it limits outcomes, does not reduce the possibilities to one. If we imagine all conceivable possibilities arranged as a circle, prior causality may allow for only one degree of arc, but it does not specify a single point. There is some free play, and even one degree of difference, if it concatenates minute by minute, may soon make the world radically unpredictable. If by a social science we mean one capable of predicting events on the model of Newtonian astronomy, the counter-tradition regards the very notion as absurd.
Years ago, when there were two Germanies, I had trouble remembering whether the German Democratic Republic was the Communist dictatorship and the Federal Republic the parliamentary democracy or the reverse, until an important principle—I call it the principle of nomination—occurred to me. The state that calls itself democratic isn’t. Now consider: we have one discipline called physics and another called political science.
In War and Peace, Tolstoy takes the possibility of a “science of warfare” as a metaphor for any conceivable social science. The generals who believe they have such a science—who think their method allows them to “foresee all contingencies” and predict the outcome of battles—lose. Something unforeseen and unforeseeable always takes place. The novel’s hero, Prince Andrei, eventually learns that there can be no science of battle or of anything else in the social world, because genuine contingency reigns. He asks: “What science can there be, in a matter in which, as in every practical matter, nothing can be determined and everything depends on innumerable conditions, the significance of which becomes manifest at a particular moment, and no one can tell when that moment will come?” When Pierre, who believes in perfect knowledge, speaks of eliminating contingencies, Andrei tells him that is all nonsense. “What are we facing tomorrow? A hundred million diverse chances, which will be decided on the instant by whether we run or they run, whether this man or that man is killed.”
For thinkers like the physicist Freeman Dyson and the paleontologist Stephen Jay Gould, passages like these have made the name Tolstoy a shorthand for a concept: contingency. Tolstoy is the thinker to whom people turn when they want to begin with contingency. Despite what many say, an appreciation of contingency is not anti-scientific, but anti-pseudo-scientific. For a truly scientific approach faces the observed facts, and the fact is that no one can tell whom a bullet will hit—a quarter millimeter may make a difference—and there are so many contingencies in the social world that prediction is impossible.
Andrei keeps stressing what will be decided “on the instant” and what will happen “at a particular moment, and no one can tell when that moment will come.” Andrei has a strong sense of presentness—that what happens at any particular moment really matters because something else could have happened. The moment is truly momentous. What chances happen, what decisions are made, truly make a difference.
One cannot just plug in the numbers. What will happen later is not already given. The novel of our lives is not already written; it is in process, and the next chapter is up for grabs. Possibilities exceed actualities. Whatever happens, something else might have. That is why narrative is essential to explanation. For the tradition, narrative is something to be overcome. For the counter-tradition, the world itself possesses narrativeness.
Once one learns to think in this way, one’s whole approach to life, to decision-making, and to ethics changes. If one lives in a world where certainty is possible and (as almost everyone who believes in certainty adds) close at hand, then comprehensive advance planning almost always makes sense. Thus the foolish generals in War and Peace stay up late working out battle plans, whereas the wise general Kutuzov falls asleep at councils of war. He remarks that the best preparation for a battle is not more plans but “a good night’s sleep.” In a world of relative certainty, planning is most effective. In a world of radical uncertainty, what matters most is alertness, so one can take advantage of the unforeseeable opportunities of the moment. The night before an SAT, don’t study any more; get some sleep.
I know a Russian economist, Aron Katsenelinboigen, who once sat on the commission planning the entire Soviet economy. Eventually recognizing the impossibility of responding to constantly unpredictable obstacles and opportunities, he gave up that ideal and spent the rest of his life trying to understand how best to orient oneself to uncertainty. War and Peace was his bible.
Aron liked to ask impishly why evolution, which has designed structures so complex we cannot come close to making them ourselves—think of the liver—has never designed an animal with wheels. Wheels seem so efficient: who would walk, rather than drive, to San Francisco? As I pictured a wheeled cheetah speeding across the Serengeti, Aron reminded me that the world is not paved. In a world of relative certainty, like our highway system, wheels make sense, but imagine a wheeled animal trying to get around a fallen tree in a forest. Legs, though less efficient in a regular environment, are much more versatile. Their universality testifies to the most fundamental fact about the biological world: it is uncertain.
In the social world it is even more fundamental that order is not given, does not underlie apparent disorder, but is always the result of work. The anthropologist Gregory Bateson composed a series of dialogues with his daughter, who posed the question: why is it that if my room is neat and I pay no attention it will soon get messy, but if my room is messy and I pay no attention it never gets neat? Things messify, never neatify. They tend to disorder, and the fundamental state of the social world is mess.
The ideal of perfection derives from the notion of a divine creator executing a plan at a single moment and making everything fit. With the death of God, the same picture of the world has been preserved by various God substitutes, which, without God, do what God would do. Natural selection, the invisible hand, or other mechanisms insure optimality. By contrast, the counter-tradition does without such God substitutes, and sees everything as the product of history, of contingent events constraining later events, so that nothing looks as if it had been flawlessly designed.
The belief that natural selection and the invisible hand insure optimality has become a kind of materialist religion. It makes the social world operate like Leibniz’s God. In fact, Smith and Darwin said exactly the opposite, both insisting on irrationality and imperfection. Most of The Wealth of Nations is devoted to narrative—and it is a narrative of economic history in which the main force is “human folly.” Darwin described natural selection as a loose regulating principle that emphatically does not insure optimality. Darwin recognized that it is imperfection, not perfection, that testifies to a historical process. A perfect being executing a perfect plan at an instant might make everything fit, but a process responding over time to a haphazard series of contingencies would lead to organisms that are imperfectly designed. Each stage would be constrained by forms evolved at earlier stages, and so optimal solutions might not be available. Moreover, vestigial organs would persist. Darwin’s favorite example was a certain species of mole that lives its entire life underground and yet has eyes, but even if the mole went above ground, its eyes would be useless since they are occluded with a thick membrane. Any bodily organ requires a certain amount of energy, so one that does not pay its way is palpably harmful to survival. No one designing an optimal mole would give it those eyes, which therefore testify to a historical process and a time when eyes were useful.
The founder of cultural anthropology as a discipline, Malinowski, insisted that his new science would soon be capable of prediction. Everything in a given culture is optimally designed and fits together perfectly. Society resembles a perfect poem by a great artist, which is why the current—and to me absurd—movement of cultural studies has been able to extend the insight and “read culture like a poem.” Just as a good poem eliminates the less than perfect solutions of early drafts, so in culture, Malinowski insisted, there can never be mere useless survivals persisting from earlier stages. Nothing could be further from Darwin and his mole. The structuralist Lévi-Strauss and the philosopher Michel Foucault also insisted on perfect fit and the absence of survivals. I always want to ask if these thinkers lacked an appendix.
Although the dominant tradition of Western theology has insisted on perfection and closed time, an opposing tradition does exist. It is to be found first of all in the Bible itself, where God, quite explicitly, speaks as someone who can be surprised by events and who acts from within time with uncertain results. In the Noah story, he regrets that he made man, and regret is one of those emotions that presuppose lack of foresight. God makes the rainbow to remind himself never to bring another such flood, which means he is capable of forgetting and knows it. In the story of the binding of Isaac, God explicitly does not know whether Abraham will pass the test. Jewish and Christian theologians have traditionally argued that such passages, which contradict divine perfection and omniscience, were simply concessions to a primitive tribe’s limited understanding, but what if we were to believe the plain word of Holy Writ rather than a theologian’s contrary gloss? Some have—and those who do accept open time.
My social science colleagues have actually adopted an outmoded view of science. The psychology professor said that determinism necessarily obtains except in certain remote areas of quantum physics, which don’t matter. He was alluding to the notorious fact that in quantum physics two identical systems can develop in different directions: that is to say, the past allows for more than one option, exactly as the proponents of open time say. The fact that these experiments are remote is beside the point. Determinism is an all-or-nothing doctrine. The question is not whether some things are given by the past—no one doubts that—but whether everything is, and if there is one exception, determinism is false.
More important, the real appeal of determinism, the bedrock feeling that I think really persuades people of it, is the sense that things could be no other way, that any other vision is simply incoherent. Once we know that, despite that feeling, things emphatically can be another way, and some are, the determinist argument is revealed for what it is: a metaphysical prejudice that contradicts both the facts and genuinely scientific scrutiny.
When the determinist paradigm at last fades, I think people will have trouble even imagining how it could have seemed so compelling, as we now wonder how the existence of witches could have seemed indubitable. After all, if a theory contradicts virtually all our experience, the burden of proof should be on those who tell us our experience is mistaken, not the reverse.
The hubris of the dominant tradition leads to awful consequences. We have just lived through what is by far the bloodiest century in human history, and blood was shed above all by those who believed in some deterministic system claiming to be a science, like “scientific socialism.” Historically, officially atheist regimes have been far bloodier than religious ones. The Inquisition can’t hold a candle to the KGB. I think if there is one lesson the twentieth century ought to teach us, it is: be wary of claims of scientific status.
This article originally appeared in The New Criterion, Volume 23 Number 9, on page 17