Foer then takes a turn to the philosophical antecedents of computational science as promulgated by René Descartes. “I am a thinking thing that can exist without a body,” Descartes wrote. Now “Protestant” thinkers such as Sam Harris translate that into such statements as “Intelligence is information processing.” The thing that characterizes the age of Descartes, which hasn’t really ended, unfortunately, is this “scouring out” of the corporeal self, engendering a deepening split between the realms of thinking and feeling, with feeling relegated to some backwater that has no bearing on our evolutionary advance.

Then Foer jumps to Alan Turing, with his infamous “Turing Test,” in which behavior counts for reality: if it looks like an intelligent entity, then it is an intelligent entity. You are your behavioral artifacts. “What it’s like” to be a thing: why should we pay any attention to that? Thus the battle lines are drawn, and there is no compromise possible between them. Naturwissenschaft vs. Geisteswissenschaft. Can’t you feel how close we are to the ultimate showdown?

Foer: “Engineering is considered the paragon of rationality–a profession devoted to systems and planning, the enemy of spontaneity and instinct.” The enemy. We must determine who is friend and who is enemy. That is the science of politics, according to Carl Schmitt. A very dangerous way of approaching the problem. One looks so hard for common ground. But what is the common ground between Naturwissenschaft and Geisteswissenschaft? One could aver that one must integrate them. All well and good. But the scientific spirit is a totalizing one. It will not be satisfied as long as any remnant of the subjective is left standing. An interesting rebuttal of Turing is found in John Searle’s notorious Chinese Room Argument.
THE CHINESE ROOM ARGUMENT
Suppose that artificial intelligence research has succeeded in constructing a computer which behaves as if it understands Chinese. It takes Chinese symbols as input, consults a large look-up table (as all computers can be described as doing), and then produces other Chinese symbols as output. Suppose that this computer performs this task so convincingly that it passes the Turing test. In other words, it convinces a human Chinese speaker that it is another human Chinese speaker. All the questions the human asks are responded to appropriately, such that the Chinese speaker is convinced that he or she is talking to another Chinese speaker. The conclusion proponents of strong AI would like to draw is that the computer understands Chinese, just as the person does. Suppose further that a human operator is sitting inside a closed room possessing the necessary materials to perform all pertinent character-manipulation operations. The operator receives Chinese symbols, looks them up in tables, and returns the correct Chinese symbols, solely by using the relevant materials. Knowledge of Chinese is completely irrelevant to the question of efficacy in providing the correct output. Is there any difference between this human operator’s processes and those of a computer? The many attempted objections to this simple argument, whose conclusion is that syntax cannot suffice for semantics, are themselves highly problematic.
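The mechanical heart of the thought experiment can be sketched in a few lines. This is only a toy illustration of the room’s procedure, not anyone’s actual program; the symbol names and table entries are invented placeholders standing in for Chinese characters:

```python
# Toy model of the Chinese Room operator's procedure.
# The operator matches incoming symbols against a table and copies out the
# paired response. Nothing in the procedure requires understanding what
# any symbol means. (Entries are invented placeholders, not real dialogue.)
LOOKUP_TABLE = {
    "symbol-A": "symbol-X",
    "symbol-B": "symbol-Y",
}

def operator(input_symbols: str) -> str:
    """Return the table's paired output for the input symbols.

    The operator consults only the table; knowledge of the symbols'
    meaning plays no role in producing the correct output.
    """
    return LOOKUP_TABLE.get(input_symbols, "symbol-default")

print(operator("symbol-A"))  # -> symbol-X
```

Searle’s point is precisely that this is all the room (or the computer) does: whether the table is two entries long or two billion, the procedure is pure symbol shuffling, and no amount of shuffling adds up to understanding.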
Such confusions are inevitable in a world where engineers and computer scientists basically call the shots.
Foer then takes up the problem of the algorithm. “The algorithm was developed to automate thinking [italics mine-dw], to remove difficult decisions from the hands of humans, to settle contentious debates.” Erupting like magma made of ice from the volcanic mind of Gottfried Leibniz (1646-1716), the algorithm itself issued from Leibniz’s calculus. In the development of his calculus, he imagined a sort of “alphabet of human thought.” To create this “alphabet”, one takes as data the incontestably true facts about the world. Each discrete entity in this category, which Leibniz called the “primitives”, would be assigned a numerical value. These values would then form the basis of a new calculus of thought, the “calculus ratiocinator”. The nonsense gathers quickly from this point. Leibniz says, let us assign numerical values to these data. One such datum is “animal”, another “rationality”. Rational times animal equals Man. And a further reduction: “Animal” is given the value of 2. “Rational” is given the value of 3. 2 x 3 = 6. Then Leibniz, genius of the nascent 18th century, asks a deep question: Are all men monkeys? Since the value for monkey is 10 (anyone can see that this is self-evident), which cannot be divided evenly by 6, there is no element of monkey in man. Case closed. So easy! What an elegant proof that all knowledge can be derived from computation. I guess engineers don’t really accept this sort of reasoning today. But generally speaking the reduction of complex features of the world, and especially human emotional realities, to numerically-based computation proceeds apace.
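The arithmetic Foer recounts is easy to make literal. A minimal sketch, taking the values exactly as given in the anecdote (animal = 2, rational = 3, monkey = 10) and using divisibility as the test for whether one concept contains another:

```python
# Toy version of Leibniz's "calculus ratiocinator" as Foer recounts it.
# Primitives get numbers; a compound concept is the product of its
# primitives; concept A "contains" concept B iff B's value divides A's.
primitives = {"animal": 2, "rational": 3}

def compound(*names: str) -> int:
    """A compound concept's value is the product of its primitives' values."""
    value = 1
    for name in names:
        value *= primitives[name]
    return value

def contains(concept_value: int, feature_value: int) -> bool:
    """Concept A contains feature B iff B's value divides A's evenly."""
    return concept_value % feature_value == 0

man = compound("animal", "rational")  # 2 x 3 = 6
monkey = 10                           # the "self-evident" value in the anecdote

print(man)                   # -> 6
print(contains(monkey, man)) # -> False: 10 is not evenly divisible by 6
```

The scheme is recognizably an ancestor of later prime-number encodings of statements (Gödel numbering being the famous descendant), which is exactly why the anecdote is more than a joke: the reduction of concepts to arithmetic really did prove computationally fertile, just not as a theory of thought.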
Next, Foer digs into the current state of algorithm deployment on the internet. I will continue next time with some of the little gems he discusses in this connection.