Perks of the “Enlightenment”–Cripples Walk
The “enlightenment”, in fact the very claim to enlightenment, is the idea that we will now focus on the stuff that really matters: the mechanical, the manipulation of matter for technology's sake, leaving behind our concern for abstract, non-empirically-verifiable concepts such as “the good.” It was believed, more and more often, that everything that takes place, even our experience itself, is totally explainable in terms of the matter underlying the process…even mental activity.
Before I get off on too much of a rant, the purpose of this post is to show one of the wonderful perks of the “enlightenment”: technology is growing incredibly fast, though a world without certain goods can still grow incredibly dark. As explained in our previous post on transhumanism, who needs religious or miraculous healers when the empirical sciences (we) can do it for ourselves? Be proud, humanists, we will conquer the universe!
No, but seriously, this is pretty cool.
Ecce!
That's the main part anyway, but I wanted to reblog a post recently written by Prof. Feser for those who want a more in-depth commentary on materialism and consciousness:
That the brain is a digital computer and the mind the software run on the computer are theses that seem to many to be confirmed by our best science, or at least by our best science fiction. But we recently looked at some arguments from Karl Popper, John Searle, and others that expose serious (indeed, I would say fatal) difficulties with the computer model of the mind. Saul Kripke presents another such argument. It is not well known. It was hinted at in a footnote in his famous book Wittgenstein on Rules and Private Language (WRPL) and developed in some unpublished lectures. But Jeff Buechner’s recent article “Not Even Computing Machines Can Follow Rules: Kripke’s Critique of Functionalism” offers a very useful exposition of Kripke’s argument. (You can find Buechner’s article in Alan Berger’s anthology Saul Kripke.)

Though it is, I think, not essential to Kripke’s argument, the “quus” paradox developed in WRPL provides a helpful way of stating it (and, naturally, is made use of by Kripke in stating it in WRPL). So let’s briefly take a look at that. Imagine you have never computed any numbers as high as 57, but are asked to compute “68 + 57.” Naturally, you answer “125,” confident that this is the arithmetically correct answer, but confident also that it accords with the way you have always used “plus” in the past, i.e. to denote the addition function, which, when applied to the numbers you call “68” and “57,” yields 125. But now, Kripke says, suppose that an odd skeptic asks you how you are so sure that this is really what you meant in the past, and thus how you can be certain that “125” is really the correct answer. Maybe, he suggests, the function you really meant in the past by “plus” and “+” was not addition, but rather what Kripke calls the “quus” function, which he defines as follows:

x quus y = x + y, if x, y < 57
         = 5 otherwise

So, maybe you have always been carrying out “quaddition” rather than addition, since quadding and adding will always yield the same result when the numbers are smaller than 57. That means that now that you are computing “68 + 57,” the correct answer should be “5” rather than “125.” And maybe you think otherwise only because you are now misinterpreting all your previous uses of “plus.” Of course, this seems preposterous. But how do you know the skeptic is wrong? [Well….]

Kripke’s skeptic holds that any evidence [Keyword, what type?] you have that what you always meant was addition is evidence that is consistent with your really having meant quaddition. For example, it is no good to note that you have always said “Two plus two equals four” and never “Two quus two equals four,” because what is in question is what you meant by “plus.” Perhaps, the skeptic says, every time you said “plus” you meant “quus,” and every time you said “addition” you meant “quaddition.” Neither will it help to appeal to memories of what was consciously going through your mind when you said things like “Two plus two equals four.” Even if the words “I mean plus by ‘plus,’ and not ‘quus’!” had passed through your mind, that would only raise the question of what you meant by that.

Note that it is irrelevant that most of us have in fact computed numbers higher than 57. For any given person, there is always some number, even if an extremely large one, equal to or higher than which he has never calculated, and the skeptic can always run the argument using that number instead.
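[If a concrete rendering helps, here is a quick sketch of the two functions in Python. This is my own illustration, not Kripke's or Feser's; the names plus and quus are simply labels for the functions just defined.]

def plus(x, y):
    # Ordinary addition.
    return x + y

def quus(x, y):
    # Kripke's "quus": agrees with addition whenever both arguments
    # are below 57, and returns 5 otherwise.
    if x < 57 and y < 57:
        return x + y
    return 5

# Every computation with both arguments below 57 comes out the same either way...
assert all(plus(x, y) == quus(x, y) for x in range(57) for y in range(57))

# ...but the two come apart at "68 + 57": addition gives 125, quaddition gives 5.
print(plus(68, 57), quus(68, 57))   # 125 5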
Notice also that the point can be made about what you mean now by “plus.” For all of your current linguistic behavior and the words you are now consciously running through your mind, the skeptic can ask whether you mean by it addition or quaddition.
Now, Kripke’s “quus” puzzle famously raises all sorts of questions in the philosophy of language and philosophy of mind. This is not the place to get into all that, and Kripke’s argument against functionalism does not, I think, stand or fall with any particular view about what his “quus” paradox ultimately tells us about human thought and language. [If you have been asking yourself, why is he talking about this? Now he is getting to the point.] The point for our purposes is that the “quus” example provides a useful illustration of how material processes can be indeterminate between different functions. (An Aristotelian-Thomistic philosopher like myself, by the way, is happy to allow that mental imagery — such as the entertaining of visual or auditory mental images of words like “plus” or sentences like “I mean plus, not quus!” — is as material as bodily behavior is. From an A-T point of view, among the various activities often classified by contemporary philosophers as “mental,” it is only intellectual activity in the strict sense — activity that involves the grasp of abstract concepts, and is irreducible to the entertaining of mental images — that is immaterial. And that is crucial to understanding how an A-T philosopher would approach Kripke’s argument. But again, that is a topic for another time.)

Kripke’s “quus” example can be used to state his argument about computationalism as follows. Whatever we say about what we mean when we use “plus,” there are no physical features of a computer that can determine whether it is carrying out addition or quaddition, no matter how far we extend its outputs. No matter what the past behavior of a machine has been, we can always suppose that its next output — “5,” say, when calculating numbers larger than any it has calculated before — might show that it is carrying out something like quaddition rather than addition. Of course, it might be said in response that if this happens, that would just show that the machine was malfunctioning rather than performing quaddition. But Kripke points out that whether some output counts as a malfunction itself depends on what program the machine is running, and whether the machine is running the program for addition rather than quaddition is precisely what is in question.

Another way to put the point is that the question of what program a machine is running always involves idealization. In any actual machine, gears get stuck, components melt, and in myriad other ways the machine fails perfectly to instantiate the program we say it is running. But there is nothing in the physical features or operations of the machine themselves that tells us that it has failed perfectly to instantiate its idealized program. For relative to an eccentric program, even a machine with a stuck gear or melted component could be doing exactly what it is supposed to be doing, and a gear that doesn’t stick or a component that doesn’t melt could count as malfunctioning. Hence there is nothing in the behavior of a computer, considered by itself, that can tell us whether its giving “125” in response to “What is 68 + 57?” counts as an instance of its following an idealized program for addition, or instead as a malfunction in a machine that is supposed to be carrying out an idealized program for quaddition.
And there is nothing in the behavior of a computer, considered by itself, that could tell us whether giving “5” in response to “What is 68 + 57?” counts as a malfunction in a machine that is supposed to be carrying out an idealized program for addition, or instead as an instance of properly following an idealized program for quaddition.

As Buechner points out, it is no good to appeal to counterfactuals to try to get around the problem — to claim, for example, that what the machine would have done had it not malfunctioned is answer “125” rather than “5.” For such a counterfactual presupposes that the idealized program the machine is instantiating is addition rather than quaddition, which is precisely what is in question.

Naturally, we could always ask the programmer of the machine what he had in mind. But that simply reinforces the point that there is nothing in the physical properties of the machine itself that can tell us. [Exactly, including in the case above with the attached robotic arm. The translation of biological output into useful electronic/magnetic input for the machine contains no semblance of intelligence, and I dare say this would be the case even with a simulated brain, if there could be such a thing. It would only do what it was created to do, rather than being able to exercise creativity or freedom.] But if there is nothing intrinsic to computers in general that determines what programs they are running, neither is there anything intrinsic to the human brain specifically, considered as a kind of computer, that determines what program it is running (if it is running one in the first place). Hence there can be no question of explaining the human mind in terms of programs running in the brain.

Might we appeal to God as the programmer of the brain who determines which program it is running? Obviously most defenders of the computer model of the mind would not want to do this, since they tend to be materialists and materialists tend to be atheists. But it is not a good idea in any case. For that would make of human thought something as extrinsic to human beings as the program a computer is running is extrinsic to a computer, indeed as extrinsic as the meaning of a sentence is to the sentence. Just as the meaning of “The cat is on the mat” is not really in the sounds, ink marks, or pixels in which the sentence is realized, but rather in the mind of the user or hearer of the sentence, so too the idea of God as a kind of programmer or user of the brain qua computer would entail that the meanings of our thought processes are not really in us at all but only in Him. The result would be a new riff on occasionalism that is even more bizarre than the usual kind — a version on which it is really God who is, strictly speaking, doing all our thinking for us!

Neither, as Buechner points out, will it do to suggest that natural selection has determined that we are following one program rather than another. For any program we conjecture natural selection has put into us, there is going to be an alternative program with equal survival value, and the biological facts will be indeterminate between them. There will be no reason in principle to hold that it is the one program that natural selection put into us rather than the other.

Suppose we say instead that there is what Buechner calls a “telos in Nature” that determines that the brain really is following this program rather than that — the program for addition, say, rather than quaddition?
In that case we would have some end or purpose intrinsic to the natural world that determines which program the brain instantiates, which would eliminate the occasionalist problem the appeal to God as programmer raised. (Of course, you could give a Fifth Way style argument for God as the ultimate explanation of this intrinsic telos, but that would not be to make of God a “programmer” in the relevant sense, any more than Aquinas’s Fifth Way makes of God a Paley-style tinkerer.)

Buechner himself is not sympathetic to this “telos in Nature” suggestion, but it is, naturally, one that an Aristotelian is bound to take seriously. But it does not help the advocate of the computer model of the mind, at least not if he is a materialist. For to affirm that there is teleology intrinsic in nature is just to abandon the materialist’s conception of matter and return to something like the Aristotelian-Scholastic conception that materialists, like other modern philosophers, thought they had buried forever back in the days of Hobbes and Descartes.

Still, if the computer model of the mind leads people to reconsider Aristotelianism, it can’t be all bad. (Cf. James Ross’s “The Fate of the Analysts: Aristotle’s Revenge: Software Everywhere”)
Well said.
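For the programmers in the audience, here is a toy sketch of my own (not Feser's or Buechner's) of the point about behavior underdetermining programs: given only a record of a machine's past inputs and outputs, all involving numbers below 57, both candidate programs fit that record perfectly, so nothing in the record settles whether a later output of “5” is a malfunction or a correct execution.

def plus(x, y):
    # Ordinary addition.
    return x + y

def quus(x, y):
    # Agrees with addition whenever both arguments are below 57; returns 5 otherwise.
    return x + y if x < 57 and y < 57 else 5

# A hypothetical transcript of everything the machine has been observed to do so far.
transcript = [((3, 4), 7), ((10, 22), 32), ((56, 1), 57)]

# Both candidate programs are consistent with every observed input/output pair...
print(all(plus(x, y) == out for (x, y), out in transcript))   # True
print(all(quus(x, y) == out for (x, y), out in transcript))   # True

# ...so whether a later output of 5 for the input (68, 57) counts as a malfunction
# depends on which idealized program we attribute to the machine, and that
# attribution is exactly what the observed behavior cannot fix.

The transcript above is invented purely for illustration; the only point is that any finite record of below-57 computations is consistent with both definitions.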
When it comes to explaining why the materialist account of consciousness cannot work, it's hard to beat John Searle's Chinese Room argument.