Sunday, January 22, 2017

Reflections on the zombie-scientist problem.

The huge cultural authority science has acquired over the past century imposes large duties on every scientist. Scientists have acquired the power to impress and intimidate every time they open their mouths, and it is their responsibility to keep this power in mind no matter what they say or do. Too many have forgotten their obligation to approach with due respect the scholarly, artistic, religious, humanistic work that has always been mankind’s main spiritual support. Scientists are (on average) no more likely to understand this work than the man in the street is to understand quantum physics. But science used to know enough to approach cautiously and admire from outside, and to build its own work on a deep belief in human dignity. No longer.
Today science and the “philosophy of mind”—its thoughtful assistant, which is sometimes smarter than the boss—are threatening Western culture with the exact opposite of humanism. Call it roboticism. Man is the measure of all things, Protagoras said. Today we add, and computers are the measure of all men.
Many scientists are proud of having booted man off his throne at the center of the universe and reduced him to just one more creature—an especially annoying one—in the great intergalactic zoo. That is their right. But when scientists use this locker-room braggadocio to belittle the human viewpoint, to belittle human life and values and virtues and civilization and moral, spiritual, and religious discoveries, which is all we human beings possess or ever will, they have outrun their own empiricism. They are abusing their cultural standing. Science has become an international bully.
Nowhere is its bullying more outrageous than in its assault on the phenomenon known as subjectivity.
Your subjective, conscious experience is just as real as the tree outside your window or the photons striking your retina—even though you alone feel it. Many philosophers and scientists today tend to dismiss the subjective and focus wholly on an objective, third-person reality—a reality that would be just the same if men had no minds. They treat subjective reality as a footnote, or they ignore it, or they announce that, actually, it doesn’t even exist.
If scientists were rat-catchers, it wouldn’t matter. But right now, their views are threatening all sorts of intellectual and spiritual fields. The present problem originated at the intersection of artificial intelligence and philosophy of mind—in the question of what consciousness and mental states are all about, how they work, and what it would mean for a robot to have them. It has roots that stretch back to the behaviorism of the early 20th century, but the advent of computing lit the fuse of an intellectual crisis that blasted off in the 1960s and has been gaining altitude ever since.
Bullying Nagel.
The modern “mind fields” encompass artificial intelligence, cognitive psychology, and philosophy of mind. Researchers in these fields are profoundly split, and the chaos was on display in the ugliness occasioned by the publication of Thomas Nagel’s Mind & Cosmos in 2012. Nagel is an eminent philosopher and professor at NYU. In Mind & Cosmos, he shows with terse, meticulous thoroughness why mainstream thought on the workings of the mind is intellectually bankrupt. He explains why Darwinian evolution is insufficient to explain the emergence of consciousness—the capacity to feel or experience the world. He then offers his own ideas on consciousness, which are speculative, incomplete, tentative, and provocative—in the tradition of science and philosophy.
Nagel was immediately set on and (symbolically) beaten to death by all the leading punks, bullies, and hangers-on of the philosophical underworld. Attacking Darwin is the sin against the Holy Ghost that pious scientists are taught never to forgive. Even worse, Nagel is an atheist unwilling to express sufficient hatred of religion to satisfy other atheists. There is nothing religious about Nagel’s speculations; he believes that science has not come far enough to explain consciousness and that it must press on. He believes that Darwin is not sufficient.
The intelligentsia was so furious that it formed a lynch mob. In May 2013, the Chronicle of Higher Education ran a piece called “Where Thomas Nagel Went Wrong.” One paragraph was notable:
Whatever the validity of [Nagel’s] stance, its timing was certainly bad. The war between New Atheists and believers has become savage, with Richard Dawkins writing sentences like, “I have described atonement, the central doctrine of Christianity, as vicious, sadomasochistic, and repellent. We should also dismiss it as barking mad….” In that climate, saying anything nice at all about religion is a tactical error.
It’s the cowardice of the Chronicle’s statement that is alarming—as if the only conceivable response to a mass attack by killer hyenas were to run away. Nagel was assailed; almost everyone else ran.
The Kurzweil Cult.
The voice most strongly associated with what I’ve termed roboticism is that of Ray Kurzweil, a leading technologist and inventor. The Kurzweil Cult teaches that, given the rapid, ever-increasing pace of technological progress and change, a fateful crossover point is approaching. He calls this point the “singularity.” After the year 2045 (mark your calendars!), machine intelligence will dominate human intelligence to the extent that men will no longer understand machines any more than potato chips understand mathematical topology. Men will already have begun an orgy of machinification—implanting chips in their bodies and brains, and fine-tuning their own and their children’s genetic material. Kurzweil believes in “transhumanism,” the merging of men and machines. He believes human immortality is just around the corner. He works for Google.
Whether he knows it or not, Kurzweil believes in and longs for the death of mankind. Because if things work out as he predicts, there will still be life on Earth, but no human life. To predict that a man who lives forever and is built mainly of semiconductors is still a man is like predicting that a man with stainless steel skin, a small nuclear reactor for a stomach, and an IQ of 10,000 would still be a man. In fact we have no idea what he would be.
Each change in him might be defended as an improvement, but man as we know him is the top growth on a tall tree in a large forest: His kinship with his parents and ancestors and mankind at large, the experience of seeing his own reflection in human history and his fellow man—those things are the crucial part of who he is. If you make him grossly different, he is lost, with no reflection anywhere he looks. If you make lots of people grossly different, they are all lost together—cut adrift from their forebears, from human history and human experience. Of course we do know that whatever these creatures are, untransformed men will be unable to keep up with them. Their superhuman intelligence and strength will extinguish mankind as we know it, or reduce men to slaves or dogs. To wish for such a development is to play dice with the universe.
Luckily for mankind, there is (of course) no reason to believe that brilliant progress in any field will continue, much less accelerate; imagine predicting the state of space exploration today based on the events of 1960–1972. But the real flaw in the Kurzweil Cult’s sickening predictions is that machines do just what we tell them to. They act as they are built to act. We might in principle, in the future, build an armor-plated robot with a stratospheric IQ that refuses on principle to pay attention to human beings. Or an average dog lover might buy a German shepherd and patiently train it to rip him to shreds. Both deeds are conceivable, but in each case, sane persons are apt to intervene before the plan reaches completion.
Banishing Subjectivity. 
Subjectivity is your private experience of the world: your sensations; your mental life and inner landscape; your experiences of sweet and bitter, blue and gold, soft and hard; your beliefs, plans, pains, hopes, fears, theories, imagined vacation trips and gardens and girlfriends and Ferraris, your sense of right and wrong, good and evil. This is your subjective world. It is just as real as the objective physical world.
The idea of objective reality—an idea we associate with Galileo and Descartes and other scientific revolutionaries of the 17th century—is a masterpiece of Western thought for just this reason. The only view of the world we can ever have is subjective, from inside our own heads. That we can agree nonetheless on the observable, exactly measurable, and predictable characteristics of objective reality is a remarkable fact. I can’t know that the color I call blue looks to me the same way it looks to you. And yet we both use the word blue to describe this color, and common sense suggests that your experience of blue is probably a lot like mine. Our ability to transcend the subjective and accept the existence of objective reality is the cornerstone of everything modern science has accomplished.
But that is not enough for the philosophers of mind. Many wish to banish subjectivity altogether. “The history of philosophy of mind over the past one hundred years,” the eminent philosopher John Searle has written, “has been in large part an attempt to get rid of the mental”—i.e., the subjective—“by showing that no mental phenomena exist over and above physical phenomena.”
Why bother? Because to present-day philosophers, Searle writes, “the subjectivist ontology of the mental seems intolerable.” That is, your states of mind (your desire for adventure, your fear of icebergs, the ship you imagine, the girl you recall) exist only subjectively, within your mind, and they can be examined and evaluated by you alone. They do not exist objectively. They are strictly internal to your own mind. And yet they do exist. This is intolerable! How in this modern, scientific world can we be forced to accept the existence of things that can’t be weighed or measured, tracked or photographed—that are strictly private, that can be observed by exactly one person each? Ridiculous! Or at least, damned annoying.
And yet your mind is, was, and will always be a room with a view. Your mental states exist inside this room you can never leave and no one else can ever enter. The world you perceive through the window of mind (where you can never go—where no one can ever go) is the objective world. Both worlds, inside and outside, are real.
The ever-astonishing Rainer Maria Rilke captured this truth vividly in the opening lines of his eighth Duino Elegy, as translated by Stephen Mitchell: “With all its eyes the natural world looks out/into the Open. Only our eyes are turned backward….We know what is really out there only from/the animal’s gaze.” We can never forget or disregard the room we are locked into forever.
The Brain as Computer.
The dominant, mainstream view of mind nowadays among philosophers and many scientists is computationalism, also known as cognitivism. This view is inspired by the idea that minds are to brains as software is to computers. “Think of the brain,” writes Daniel Dennett of Tufts University in his influential 1991 book Consciousness Explained, “as a computer.” In some ways this is an apt analogy. In others, it is crazy. At any rate, it is one of the intellectual milestones of modern times.
How did this “master analogy” become so influential?
Consider the mind. The mind has its own structure and laws: It has desires, emotions, imagination; it is conscious. But no mind can exist apart from the brain that “embodies” it. Yet the brain’s structure is different from the mind’s. The brain is a dense tangle of neurons and other cells in which neurons send electrical signals to other neurons downstream via a wash of neurotransmitter chemicals, like beach bums splashing each other with bucketfuls of water.
Two wholly different structures, one embodied by the other—this is also a precise description of computer software as it relates to computer hardware. Software has its own structure and laws (software being what any “program” or “application” is made of—any email program, web search engine, photo album, iPhone app, video game, anything at all). Software consists of lists of instructions that are given to the hardware—to a digital computer. Each instruction specifies one picayune operation on the numbers stored inside the computer. For example: Add two numbers. Move a number from one place to another. Look at some number and do this if it’s 0.
Large lists of tiny instructions become complex mathematical operations, and large bunches of those become even more sophisticated operations. And pretty soon your application is sending spacemen hurtling across your screen firing lasers at your avatar as you pelt the aliens with tennis balls and chat with your friends in Idaho or Algiers while sending notes to your girlfriend and keeping an eye on the comic-book news. You are swimming happily within the rich coral reef of your software “environment,” and the tiny instructions out of which the whole thing is built don’t matter to you at all. You don’t know them, can’t see them, are wholly unaware of them.
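The build-up described above—tiny instructions composing into larger operations—can be made concrete with a toy sketch. The following Python is purely illustrative and invented for this essay: it is not any real instruction set, and the names (`movi`, `add`, `jz`) are made up. It gives a machine exactly three kinds of picayune instruction—put a number somewhere, add two numbers, and branch if a number is zero—and assembles from them a “higher” operation, multiplication, which the machine itself knows nothing about.

```python
def run(program, registers):
    """Execute a list of tiny instructions against a dict of named registers."""
    pc = 0  # program counter: index of the next instruction to execute
    while pc < len(program):
        op = program[pc]
        if op[0] == "movi":          # move an immediate value into a register
            _, dst, value = op
            registers[dst] = value
            pc += 1
        elif op[0] == "add":         # add one register into another
            _, dst, src = op
            registers[dst] += registers[src]
            pc += 1
        elif op[0] == "jz":          # branch: if the register is 0, jump to target
            _, src, target = op
            pc = target if registers[src] == 0 else pc + 1
        else:
            raise ValueError(f"unknown instruction {op!r}")
    return registers

# Multiplication built from nothing but move / add / branch-if-zero:
# compute acc = a * b by adding a to acc, b times.
multiply = [
    ("movi", "acc", 0),       # 0: acc = 0
    ("movi", "minus1", -1),   # 1: a constant for counting down
    ("movi", "zero", 0),      # 2: a constant for unconditional jumps
    ("jz", "b", 7),           # 3: if b == 0, jump past the end of the program
    ("add", "acc", "a"),      # 4: acc += a
    ("add", "b", "minus1"),   # 5: b -= 1
    ("jz", "zero", 3),        # 6: always jump back to the loop test
]

result = run(multiply, {"a": 6, "b": 7})
print(result["acc"])  # → 42
```

None of the seven instructions is “multiplication”; the operation exists only in the arrangement. Stack a few more such layers and you reach the coral reef of applications the paragraph above describes—while the instructions at the bottom remain invisible to the user.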
The gorgeously varied reefs called software are a topic of their own—just as the mind is. Software and computers are two different topics, just as the psychological or phenomenal study of mind is different from brain physiology. Even so, software cannot exist without digital computers, just as minds cannot exist without brains.
That is why today’s mainstream view of mind is based on exactly this analogy: Mind is to brain as software is to computer. The mind is the brain’s software—this is the core idea of computationalism.
Of course computationalists don’t all think alike. But they all believe in some version of this guiding analogy. Drew McDermott, my colleague in the computer science department at Yale University, is one of the most brilliant (and in some ways, the most heterodox) of computationalists. “The biological variety of computers differs in many ways from the kinds of computers engineers build,” he writes, “but the differences are superficial.” Note here that by biological computer, McDermott means brain.
McDermott believes that “computers can have minds”—minds built out of software, if the software is correctly conceived. In fact, McDermott writes, “as far as science is concerned, people are just a strange kind of animal that arrived fairly late on the scene….[My] purpose…is to increase the plausibility of the hypothesis that we are machines and to elaborate some of its consequences.”
John Heil of Washington University describes cognitivism this way: “Think about states of mind as something like strings of symbols, sentences.” In other words: a state of mind is like a list of numbers in a computer. And so, he writes, “mental operations are taken to be ‘computations over symbols.’” Thus, in the cognitivist view, when you decide, plan, or believe, you are computing, in the sense that software computes.
Besmirching Consciousness.
But what about consciousness? If the brain is merely a mechanism for thinking or problem-solving, how does it create consciousness?
Most computationalists default to the Origins of Gravy theory set forth by Walter Matthau in the film of Neil Simon’s The Odd Couple. Challenged to account for the emergence of gravy, Matthau explains that, when you cook a roast, “it comes.” That is basically how consciousness arises too, according to computationalists. It just comes.
In Consciousness Explained, Dennett lays out the essence of consciousness as follows: “The concepts of computer science provide the crutches of imagination we need to stumble across the terra incognita between our phenomenology as we know it by ‘introspection’ and our brains as science reveals them to us.” (Note the chuckle-quotes around introspection; for Dennett, introspection is an illusion.) Specifically: “Human consciousness can best be understood as the operation of a ‘von Neumannesque’ virtual machine.” Meaning, it is a software application (a virtual machine) designed to run on any ordinary computer. (Hence von Neumannesque: the great mathematician John von Neumann is associated with the invention of the digital computer as we know it.)
Thus consciousness is the result of running the right sort of program on an organic computer also called the human brain. If you were able to download the right app on your phone or laptop, it would be conscious, too. It wouldn’t merely talk or behave as if it were conscious. It would be conscious, with the same sort of rich mental landscape inside its head (or its processor or maybe hard drive) as you have inside yours: the anxious plans, the fragile fragrant memories, the ability to imagine a baseball game or the crunch of dry leaves underfoot. All that just by virtue of running the right program. That program would be complex and sophisticated, far more clever than anything we have today. But no different fundamentally, say the computationalists, from the latest video game.
The Flaws.
But the master analogy—between mind and software, brain and computer—is fatally flawed. It falls apart once you mull these simple facts:
1. You can transfer a program easily from one computer to another, but you can’t transfer a mind, ever, from one brain to another.
2. You can run an endless series of different programs on any one computer, but only one “program” runs, or ever can run, on any one human brain.
3. Software is transparent. I can read off the precise state of the entire program at any time. Minds are opaque—there is no way I can know what you are thinking unless you tell me.
4. Computers can be erased; minds cannot.
5. Computers can be made to operate precisely as we choose; minds cannot.
There are more. Come up with them yourself. It’s easy.
There is a still deeper problem with computationalism. Mainstream computationalists treat the mind as if its purpose were merely to act and not to be. But the mind is for doing and being. Computers are machines, and idle machines are wasted. That is not true of your mind. Your mind might be wholly quiet, doing (“computing”) nothing; yet you might be feeling miserable or exalted, or awestruck by the beauty of the object in front of you, or inspired or resolute—and such moments might be the center of your mental life. Or you might merely be conscious. “I cannot see what flowers are at my feet,/Nor what soft incense hangs upon the boughs….Darkling I listen….” That was drafted by the computer known as John Keats.
Emotions in particular are not actions but states of being. And emotions are central to your mental life and can shape your behavior by allowing you to compare alternatives to determine which feels best. Jane Austen, Persuasion: “He walked to the window to recollect himself, and feel how he ought to behave.” Henry James, The Ambassadors: The heroine tells the hero, “no one feels so much as you. No—not any one.” She means that no one is so precise, penetrating, and sympathetic an observer.
Computationalists cannot account for emotion. It fits as badly as consciousness into the mind-as-software scheme.
The Body and the Mind.
And there is (at least) one more area of special vulnerability in the computationalist worldview. Computationalists believe that the mind is embodied by the brain, and the brain is simply an organic computer. But in fact, the mind is embodied not by the brain but by the brain and the body, intimately interleaved. Emotions are mental states one feels physically; thus they are states of mind and body simultaneously. (Angry, happy, awestruck, relieved—these are physical as well as mental states.) Sensations are simultaneously mental and physical phenomena. Wordsworth writes about his memories of the River Wye: “I have owed to them/In hours of weariness, sensations sweet,/Felt in the blood, and felt along the heart/And passing even into my purer mind…”
Where does the physical end and the mental begin? The resonance between mental and bodily states is a subtle but important aspect of mind. Bodily sensations bring about mental states that cause those sensations to change and, in turn, the mental states to develop further. You are embarrassed, and blush; feeling yourself blush, your embarrassment increases. Your blush deepens. “A smile of pleasure lit his face. Conscious of that smile, [he] shook his head disapprovingly at his own state.” (Tolstoy.) As Dmitry Merezhkovsky writes brilliantly in his classic Tolstoy study, “Certain feelings impel us to corresponding movements, and, on the other hand, certain habitual movements impel to the corresponding mental states….Tolstoy, with inimitable art, uses this convertible connection between the internal and the external.”
All such mental phenomena depend on something like a brain and something like a body, or an accurate reproduction or simulation of certain aspects of the body. However hard or easy you rate the problem of building such a reproduction, computing has no wisdom to offer regarding the construction of human-like bodies—even supposing that it knows something about human-like minds.
I cite Keats or Rilke, Wordsworth, Tolstoy, Jane Austen because these “subjective humanists” can tell us, far more accurately than any scientist, what things are like inside the sealed room of the mind. When subjective humanism is recognized (under some name or other) as a school of thought in its own right, one of its characteristics will be looking to great authors for information about what the inside of the mind is like.
To say the same thing differently: Computers are information machines. They transform one batch of information into another. Computationalists often describe the mind as an “information processor.” But feelings are not information! Feelings are states of being. A feeling (mild wistfulness, say, on a warm summer morning) has, ordinarily, no information content at all. Wistful is simply a way to be.
Let’s be more precise: We are conscious, and consciousness has two aspects. To be conscious of a thing is to be aware of it (know about it, have information about it) and to experience it. Taste sweetness; see turquoise; hear an unresolved dissonance—each feels a certain way. To experience is to be some way, not to do some thing.
The whole subjective field of emotions, feelings, and consciousness fits poorly with the ideology of computationalism, and with the project of increasing “the plausibility of the hypothesis that we are machines.”
Thomas Nagel: “All these theories seem insufficient as analyses of the mental because they leave out something essential.” (My italics.) Namely? “The first-person, inner point of view of the conscious subject: for example, the way sugar tastes to you or the way red looks or anger feels.” All other mental states (not just sensations) are left out, too: beliefs and desires, pleasures and pains, whims, suspicions, longings, vague anxieties; the mental sights, sounds, and emotions that accompany your reading a novel or listening to music or daydreaming.
Functionalism.
How could such important things be left out? Because functionalism is today’s dominant view among theorists of the mind, and functionalism leaves them out. It leaves these dirty boots on science’s back porch. Functionalism asks, “What does it mean to be, for example, thirsty?” The answer: Certain events (heat, hard work, not drinking) cause the state of mind called thirst. This state of mind, together with others, makes you want to do certain things (like take a drink). Now you understand what “I am thirsty” means. The mental (the state of thirst) has not been written out of the script, but it has been reduced to the merely physical and observable: to the weather, and what you’ve been doing, and what actions (take a drink) you plan to do.
But this explanation is no good, because “thirst” means, above all, that you feel thirsty. It is a way of being. You have a particular sensation. (That feeling, in turn, explains such expressions as “I am thirsty for knowledge,” although this “thirst” has nothing to do with the heat outside.)
And yet you can see the seductive quality of functionalism, and why it grew in prominence along with computers. No one knows how a computer can be made to feel anything, or whether such a thing is even possible. But once feeling and consciousness are eliminated, creating a computer mind becomes much easier. Nagel calls this view “a heroic triumph of ideological theory over common sense.”
Some thinkers insist otherwise. Experiencing sweetness or the fragrance of lavender or the burn of anger is merely a biochemical matter, they say. Certain neurons fire, certain neurotransmitters squirt forth into the inter-neuron gaps, other neurons fire, and the problem is solved: There is your anger, lavender, sweetness.
There are two versions of this idea: Maybe brain activity causes the sensation of anger or sweetness or a belief or desire; maybe, on the other hand, it just is the sensation of anger or sweetness—sweetness is certain brain events in the sense that water is H2O.
But how do those brain events bring about, or translate into, subjective mental states? How is this amazing trick done? What does it even mean, precisely, to cross from the physical to the mental realm?
The Zombie Argument.
Understanding subjective mental states ultimately comes down to understanding consciousness. And consciousness is even trickier than it seems at first, because there is a serious, thought-provoking argument that purports to show us that consciousness is not just mysterious but superfluous. It’s called the Zombie Argument. It’s a thought experiment that goes like this:
Imagine your best friend. You’ve known him for years, have had a million discussions, arguments, and deep conversations with him; you know his opinions, preferences, habits, and characteristic moods. Is it possible to suppose (just suppose) that he is in fact a zombie?
By zombie, philosophers mean a creature who looks and behaves just like a human being, but happens to be unconscious. He does everything an ordinary person does: walks and talks, eats and sleeps, argues, shouts, drives his car, lies on the beach. But there’s no one home: He (meaning it) is actually a robot with a computer for a brain. On the outside he looks like any human being: This robot’s behavior and appearance are wonderfully sophisticated.
No evidence makes you doubt that your best friend is human, but suppose you did ask him: Are you human? Are you conscious? The robot could be programmed to answer no. But it’s designed to seem human, so more likely its software produces an answer such as, “Of course I’m human, of course I’m conscious!—talk about stupid questions. Are you conscious? Are you human, and not half-monkey? Jerk.”
So that’s a robot zombie. Now imagine a “human” zombie, an organic zombie, a freak of nature: It behaves just like you, just like the robot zombie; it’s made of flesh and blood, but it’s unconscious. Can you imagine such a creature? Its brain would in fact be just like a computer: a complex control system that makes this creature speak and act exactly like a man. But it feels nothing and is conscious of nothing.
Many philosophers (on both sides of the argument about software minds) can indeed imagine such a creature. Which leads them to the next question: What is consciousness for? What does it accomplish? Put a real human and the organic zombie side by side. Ask them any questions you like. Follow them over the course of a day or a year. Nothing reveals which one is conscious. (They both claim to be.) Both seem like ordinary humans.
So why should we humans be equipped with consciousness? Darwinian theory explains that nature selects the best creatures on wholly practical grounds, based on survivable design and behavior. If zombies and humans behaved the same way all the time, one group would be just as able to survive as the other. So why would nature have taken the trouble to invent an elaborate thing like consciousness, when it could have got along without it just as well?
Such questions have led the Australian philosopher of mind David Chalmers to argue that consciousness doesn’t “follow logically” from the design of the universe as we know it scientifically. Nothing stops us from imagining a universe exactly like ours in every respect except that consciousness does not exist.
Nagel believes that “our mental lives, including our subjective experiences” are “strongly connected with and probably strictly dependent on physical events in our brains.” But—and this is the key to understanding why his book posed such a danger to the conventional wisdom in his field—Nagel also believes that explaining subjectivity and our conscious mental lives will take nothing less than a new scientific revolution. Ultimately, “conscious subjects and their mental lives” are “not describable by the physical sciences.” He awaits “major scientific advances,” “the creation of new concepts” before we can understand how consciousness works. Physics and biology as we understand them today don’t seem to have the answers.
On consciousness and subjectivity, science still has elementary work to do. That work will be done correctly only if researchers understand what subjectivity is, and why it shares the cosmos with objective reality.
Of course the deep and difficult problem of why consciousness exists doesn’t hold for Jews and Christians. Just as God anchors morality, God’s is the viewpoint that knows you are conscious. Knows and cares: Good and evil, sanctity and sin, right and wrong presuppose consciousness. When free will is understood, at last, as an aspect of emotion and not behavior—we are free just insofar as we feel free—it will also be seen to depend on consciousness.
The Iron Rod.
In her book Absence of Mind, the novelist and essayist Marilynne Robinson writes that the basic assumption in every variant of “modern thought” is that “the experience and testimony of the individual mind is to be explained away, excluded from consideration.” She tells an anecdote about an anecdote. Several neurobiologists have written about an American railway worker named Phineas Gage. In 1848, when he was 25, an explosion drove an iron rod right through his brain and out the other side. His jaw was shattered and he lost an eye; but he recovered and returned to work, behaving just as he always had—except that now he had occasional rude outbursts of swearing and blaspheming, which (evidently) he had never had before.
Neurobiologists want to show that particular personality traits (such as good manners) emerge from particular regions of the brain. If a region is destroyed, the corresponding piece of personality is destroyed. Your mind is thus the mere product of your genes and your brain. You have nothing to do with it, because there is no subjective, individual you. “You” are what you say and do. Your inner mental world either doesn’t exist or doesn’t matter. In fact you might be a zombie; that wouldn’t matter either.
Robinson asks: But what about the actual man Gage? The neurobiologists say nothing about the fact that “Gage was suddenly disfigured and half blind, that he suffered prolonged infections of the brain,” that his most serious injuries were permanent. He was 25 years old and had no hope of recovery. Isn’t it possible, she asks, that his outbursts of angry swearing meant just what they usually mean—that the man was enraged and suffering? When the brain scientists tell this story, writes Robinson, “there is no sense at all that [Gage] was a human being who thought and felt, a man with a singular and terrible fate.”
Man is only a computer if you ignore everything that distinguishes him from a computer.
The Closing of the Scientific Mind.
That science should face crises in the early 21st century is inevitable. Power corrupts, and science today is the Catholic Church around the start of the 16th century: used to having its own way and dealing with heretics by excommunication, not argument.
Science is caught up, also, in the same educational breakdown that has brought so many other proud fields low. Science needs reasoned argument and constant skepticism and open-mindedness. But our leading universities have dedicated themselves to stamping them out—at least in all political areas. We routinely provide superb technical educations in science, mathematics, and technology to brilliant undergraduates and doctoral students. But if those same students have been taught since kindergarten that you are not permitted to question the doctrine of man-made global warming, or the line that men and women are interchangeable, or the multiculturalist idea that all cultures and nations are equally good (except for Western nations and cultures, which are worse), how will they ever become reasonable, skeptical scientists? They’ve been reared on the idea that questioning official doctrine is wrong, gauche, just unacceptable in polite society. (And if you are president of Harvard, it can get you fired.)
Beset by all this mold and fungus and corruption, science has continued to produce deep and brilliant work. Most scientists are skeptical about their own fields and hold their colleagues to rigorous standards. Recent years have seen remarkable advances in experimental and applied physics, planetary exploration and astronomy, genetics, physiology, synthetic materials, computing, and all sorts of other areas.
But we do have problems, and the struggle of subjective humanism against roboticism is one of the most important.
The moral claims urged on man by Judeo-Christian principles and his other religious and philosophical traditions have nothing to do with Earth’s being the center of the solar system or having been created in six days, or with the real or imagined absence of rational life elsewhere in the universe. The best and deepest moral laws we know tell us to revere human life and, above all, to be human: to treat all creatures, our fellow humans and the world at large, humanely. To behave like a human being (Yiddish: mensch) is to realize our best selves.
No other creature has a best self.
This is the real danger of anti-subjectivism, in an age when the collapse of religious education among Western elites has already made a whole generation morally wobbly. When scientists casually toss our human-centered worldview in the trash with the used coffee cups, they are re-smashing the sacred tablets, not in blind rage as Moses did, but in casual, ignorant indifference to the fate of mankind.
A world that is intimidated by science and bored sick with cynical, empty “postmodernism” desperately needs a new subjectivist, humanist, individualist worldview. We need science and scholarship and art and spiritual life to be fully human. The last three are withering, and almost no one understands the first.
The Kurzweil Cult is attractive enough to require opposition in a positive sense: alternative futures must be made clear. The cults that oppose Kurzweilism are called Judaism and Christianity. But they must and will evolve to meet new dangers in new worlds. The central text of Judeo-Christian religions in the tech-threatened, Googleplectic West of the 21st century might well be Deuteronomy 30:19: “I summon today as your witnesses the heavens and the earth: I have laid life and death before you, the blessing and the curse; choose life and live!—you and your children.”
The sanctity of life is what we must affirm against Kurzweilism and the nightmare of roboticism. Judaism has always preferred the celebration and sanctification of this life in this world to eschatological promises. My guess is that 21st-century Christian thought will move back toward its father and become increasingly Judaized, less focused on death and the afterlife and more on life here today (although my Christian friends will dislike my saying so). Both religions will teach, as they always have, the love of man for man—and that, over his lifetime (as Wordsworth writes at the very end of his masterpiece, The Prelude), “the mind of man becomes/A thousand times more beautiful than the earth/On which he dwells.”
At first, roboticism was just an intellectual school. Today it is a social disease. Some young people want to be robots (I’m serious); they eagerly await electronic chips to be implanted in their brains so they will be smarter and better informed than anyone else (except for all their friends who have had the same chips implanted). Or they want to see the world through computer glasses that superimpose messages on poor naked nature. They are terrorist hostages in love with the terrorists.
All our striving for what is good and just and beautiful and sacred, for what gives meaning to human life and makes us (as Scripture says) “just a little lower than the angels,” and a little better than rats and cats, is invisible to the roboticist worldview. In the roboticist future, we will become what we believe ourselves to be: dogs with iPhones. The world needs a new subjectivist humanism now—not just scattered protests but a growing movement, a cry from the heart.


The Authentic Reactionary



by
Nicolás Gómez Dávila

The existence of the authentic reactionary is usually a scandal to the progressive. His presence causes a vague discomfort. In the face of the reactionary attitude the progressive experiences a slight scorn, accompanied by surprise and restlessness. In order to soothe his apprehensions, the progressive is in the habit of interpreting this unseasonable and shocking attitude as a guise for self-interest or as a symptom of stupidity; but only the journalist, the politician, and the fool are not secretly flustered before the tenacity with which the loftiest intelligences of the West, for the past one hundred fifty years, amass objections against the modern world. Complacent disdain does not, in fact, seem an adequate rejoinder to an attitude where a Goethe and a Dostoevsky can unite in brotherhood.
But if all the conclusions of the reactionary surprise the progressive, the reactionary stance is by itself disconcerting. That the reactionary protests against progressive society, judges it, and condemns it, and yet is resigned to its current monopoly of history, seems an eccentric position. The radical progressive, on the one hand, does not comprehend how the reactionary condemns an action that he acknowledges, and the liberal progressive, on the other, does not understand how he acknowledges an action that he condemns. The first demands that he relinquish his condemnation if he recognizes the action’s necessity, and the second that he not confine himself to abstention from an action that he admits is reprehensible. The former warns him to surrender, the latter to take action. Both censure his passive loyalty in defeat.
The radical progressive and the liberal progressive, in fact, reprove the reactionary in different ways because the one maintains that necessity is reason, while the other affirms that reason is liberty. A different vision of history conditions their critiques. For the radical progressive, necessity and reason are synonyms: reason is the substance of necessity, and necessity the process in which reason is realized. Together they are a single stream of the standing-reserve of existence.
History for the radical progressive is not merely the sum of what has occurred, but rather an epiphany of reason. Even when reason indicates that conflict is the directional mechanism of history, every triumph results from a necessary act, and the discontinuous series of acts is the path traced by the steps of irresistible reason in advancing over vanquished flesh. The radical progressive adheres to the idea that history admonishes, only because the contour of necessity reveals the features of emergent reason. The course of history itself brings forth the ideal norm that haloes it.
Convinced of the rationality of history, the radical progressive assigns himself the duty of collaborating in its success. The root of ethical obligation lies, for him, in the possibility of our propelling history toward its proper ends. The radical progressive is inclined toward the impending event in order to favor its arrival, because in taking action according to the direction of history individual reason coincides with the reason of the world. For the radical progressive, then, to condemn history is not just a vain undertaking, but also a foolish undertaking. A vain undertaking because history is necessity; a foolish undertaking because history is reason.
The liberal progressive, on the other hand, settles down in pure contingency. Liberty, for him, is the substance of reason, and history is the process in which man realizes his liberty. History for the liberal progressive is not a necessary process, but rather the ascent of human liberty toward full possession of itself. Man forges his own history, imposing on nature the errors of his free will. If hatred and greed drag man down among bloody mazes, the struggle is joined between perverted freedoms and just freedoms. Necessity is merely the dead weight of our own inertia, and the liberal progressive reckons that good intentions can redeem man, at any moment, from the servitude that oppresses him.
The liberal progressive insists that history conduct itself in a manner compatible with what reason demands, since liberty creates history; and as his liberty also engenders the causes that he champions, no fact is able to take precedence over the right that liberty establishes. Revolutionary action epitomizes the ethical obligation of the liberal progressive, because to break down what impedes it is the essential act of liberty as it is realized. History is an inert material that a sovereign will fashions. For the liberal progressive, then, to resign oneself to history is an immoral and foolish attitude. Foolish because history is freedom; immoral because liberty is our essence.
The reactionary is, nevertheless, the fool who takes up the vanity of condemning history and the immorality of resigning himself to it. Radical progressivism and liberal progressivism elaborate partial visions. History is neither necessity nor freedom, but rather their flexible integration. History is not, in fact, a divine monstrosity. The human cloud of dust does not seem to arise as if beneath the breath of a sacred beast; the epochs do not seem to be ordered as stages in the embryogenesis of a metaphysical animal; facts are not imbricated one upon another as scales on a heavenly fish. But if history is not an abstract system that germinates beneath implacable laws, neither is it the docile fodder of human madness. The whimsical and arbitrary will of man is not its supreme ruler. Facts are not shaped, like sticky, pliable paste, between industrious fingers.
 In fact, history results neither from impersonal necessity nor from human caprice, but rather from a dialectic of the will where free choice unfolds into necessary consequences. History does not develop as a unique and autonomous dialectic, which extends in vital dialectic the dialectic of inanimate nature, but rather as a pluralism of dialectical processes, numerous as free acts and tied to the diversity of their fleshly grounds.
If liberty is the creative act of history, if each free act produces a new history, the free creative act is cast upon the world in an irrevocable process. Liberty secretes history as a metaphysical spider secretes the geometry of its web. Liberty is, in fact, alienated from itself in the same gesture in which it is assumed, because free action possesses a coherent structure, an internal organization, a regular proliferation of sequelae. The act unfolds, opens up, and expands into necessary consequences, in a manner compatible with its intimate character and with its intelligible nature. Every act submits a piece of the world to a specific configuration.
History, therefore, is an assemblage of freedoms hardened in dialectical processes. The deeper the layer whence free action gushes forth, the more varied are the zones of activity that the process determines, and the greater its duration. The superficial, peripheral act is expended in biographical episodes, while the central, profound act can create an epoch for an entire society. History is articulated, thus, in instants and epochs: in free acts and in dialectical processes. Instants are its fleeting soul, epochs its tangible body. Epochs stretch out like distances between two instants: their seminal instant, and the instant when the inchoate act of a new life brings them to a close. Upon hinges of freedom swing gates of bronze. Epochs do not have an irrevocable duration: the encounter with processes looming up from a greater depth can interrupt them; inertia of the will can prolong them. Conversion is possible, passivity ordinary. History is a necessity that freedom produces and chance destroys.
Collective epochs are the result of an active complicity in an identical decision, or of the passive contamination of inert wills; but while the dialectical process in which freedoms have been poured out lasts, the freedom of the nonconformist is twisted into an ineffectual rebellion. Social freedom is not a permanent option, but rather an unforeseen auspiciousness in the conjunction of affairs. The exercise of freedom supposes an intelligence responsive to history because, confronting an entire society alienated from liberty, man can only lie in wait for the noisy crackup of necessity. Every intention is thwarted if it is not introduced into the principal fissures of a life.
In the face of history, the ethical obligation to take action arises only when the conscience consents to a purpose that momentarily prevails, or when circumstances culminate in a conjunction propitious to our freedom. The man whom destiny positions in an epoch without a foreseeable end, the character of which wounds the deepest fibers of his being, cannot heedlessly sacrifice his repugnance to his boldness, nor his intelligence to his vanity. The spectacular, empty gesture earns public applause, but the disdain of those governed by reflection. In the shadowlands of history, man ought to resign himself to patiently undermining human presumption. Man is able, thus, to condemn necessity without contradicting himself, although he is unable to take action except when necessity collapses.
If the reactionary concedes the fruitlessness of his principles and the uselessness of his censures, it is not because the spectacle of human confusion suffices for him. The reactionary does not refrain from taking action because the risk frightens him, but rather because he judges that the forces of society are at the moment rushing headlong toward a goal that he disdains. Within the current process social forces have carved their channel in bedrock, and nothing will turn their course so long as they have not emptied into the expanse of an unknown plain. The gesticulation of castaways only makes their bodies float along the further bank. But if the reactionary is powerless in our time, his condition obliges him to bear witness to his revulsion. Freedom, for the reactionary, is submission to a mandate. 
In fact, even though it be neither necessity nor caprice, history, for the reactionary, is not, for all that, an interior dialectic of the immanent will, but rather a temporal adventure between man and that which transcends him. His labors are traces, on the disturbed sand, of the body of a man and the body of an angel. History for the reactionary is a tatter, torn from man’s freedom, fluttering in the breath of destiny. The reactionary cannot be silent because his liberty is not merely a sanctuary where man escapes from deadening routine and takes refuge in order to be his own master. In the free act the reactionary does not just take possession of his essence. Liberty is not an abstract possibility of choosing among known goods, but rather the concrete condition in which we are granted the possession of new goods. Freedom is not a momentary judgment between conflicting instincts, but rather the summit from which man contemplates the ascent of new stars among the luminous dust of the starry sky. Liberty places man among prohibitions that are not physical and imperatives that are not vital. The free moment dispels the unreal brightness of the day, in order that the motionless universe that slides its fleeting lights over the shuddering of our flesh, might rise up on the horizon of the soul.

If the progressive casts himself into the future, and the conservative into the past, the reactionary does not measure his anxieties with the history of yesterday or with the history of tomorrow. The reactionary does not extol what the next dawn must bring, nor is he terrified by the last shadows of the night. His dwelling rises up in that luminous space where the essential accosts him with its immortal presence. The reactionary escapes the slavery of history because he pursues in the human wilderness the trace of divine footsteps. Man and his deeds are, for the reactionary, a servile and mortal flesh that breathes gusts from beyond the mountains. To be reactionary is to champion causes that do not turn up on the notice board of history, causes where losing does not matter. To be reactionary is to know that we only discover what we think we invent; it is to admit that our imagination does not create, but only lays bare smooth bodies. To be reactionary is not to espouse settled cases, nor to plead for determined conclusions, but rather to submit our will to the necessity that does not constrain, to surrender our freedom to the exigency that does not compel; it is to find sleeping certainties that guide us to the edge of ancient pools. The reactionary is not a nostalgic dreamer of a canceled past, but rather a hunter of sacred shades upon the eternal hills.


Nicolás Gómez Dávila (1913–1994) was a reclusive Colombian literary figure who, in the last years of his life, began to garner recognition as one of the most penetrating conservative thinkers of the twentieth century. The scion of an upper-class Bogotá family, he was educated by private tutors in Paris, where prolonged convalescence after an illness ignited a passion for classical literature. While he never attended university, his personal library would grow to more than 30,000 volumes. His reputation in Colombia was such that after the collapse of the military dictatorship in 1958 he was repeatedly offered significant political appointments, which he always refused. Gómez Dávila’s mordant critique of modernity was expressed almost entirely in books of aphorisms, which touch on philosophical, theological, political, and aesthetic themes. He sought to limn a “reactionary” perspective distinct from both the conventional Left and the conventional Right. But he made no effort to promote his intellectual work: indeed, his first book was published in a private edition of only 100 copies, which were presented as gifts to friends. His international reputation spread by word of mouth. Virtually nothing of his work is yet available in English, beyond a small sample of aphorisms on various websites. However, his complete works have been published in a German translation, prompting sustained engagement with his thought in Central Europe. Significant translations have also been undertaken in French, Italian, and Polish. The essay above, “El reaccionario auténtico,” originally appeared in Revista Universidad de Antioquia 240 (April-June 1995), 16–19. It is Gómez Dávila’s most sustained attempt to explain his own unique intellectual position, that of an “authentic reactionary.” —MCH, Winter 2010