The Error of the Computational Model

Today I want to discuss what seems to me a fundamental problem with the computational model of the mind.  I know I promised that this wasn’t going to be ALL about philosophy, and that’s still true, but this has been what’s on my mind today.  

Let me start by defining some key terms for those of you who are unfamiliar with these ideas.  First, we need to define the term “consciousness.”  Ned Block, a contemporary philosopher who wrote “On a Confusion about a Function of Consciousness,” distinguished four important ways in which we can talk about it.  For our purposes this evening, and generally for philosophers, what we’re most interested in with respect to this topic is what we call phenomenal consciousness.  Phenomenal consciousness is conscious experience, or the subjective experience of sensations or sensory stimulations.  Philosophers sometimes call these experiences qualia, that is, qualitative experiences.  For instance, when we’re enjoying a glass of wine, it has a characteristic taste that even the most eloquent vocabulary cannot seem to capture accurately.  We like to think of this experience as pre-linguistic in the sense that, even in the absence of language, we would still be able to have it.  Language certainly enriches the experience, but it is not necessary for having the experience at all.  The experience of any sensation is a qualitative experience for the subject undergoing it.  We also have qualitative experiences of emotions and thoughts.  In other words, there is some way it feels (subjectively speaking) to have such experiences or to be in such states.  Having defined that, I will use “consciousness” and “phenomenal consciousness” interchangeably throughout the rest of this post.  

Second, a computational model of the mind is one that subscribes to a functional theory of consciousness.  There are many variations a functional theory can take, but the basic idea is the same: consciousness is a matter of responding, or having a disposition to respond, to environmental stimuli in the right way.  Somebody like Paul Churchland, for instance, seems to believe that consciousness is simply a package term used to describe the mediation of sensory inputs and behavioral outputs by processes of practical reasoning.  To use an example, when I receive an injection from the doctor (sensory input), I evaluate the degree of the pain and what I am able to do about it (practical reasoning), and then I wince in response to my evaluation (behavioral output).  Obviously, this is something that can happen very quickly, such that some of the time I don’t even realize I am doing it.  
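To make that functionalist picture a bit more concrete, here is a minimal sketch in Python of the stimulus-to-behavior mapping just described.  The class name, function name, and thresholds are my own inventions for illustration only; nothing about them comes from Churchland or from any actual computationalist proposal.

```python
# A purely illustrative sketch of the functionalist picture described above:
# sensory input -> practical reasoning -> behavioral output.
# The names and thresholds are invented for this example.

from dataclasses import dataclass


@dataclass
class Stimulus:
    kind: str         # e.g., "injection"
    intensity: float  # degree of pain on an arbitrary 0-1 scale


def practical_reasoning(stimulus: Stimulus) -> str:
    """Mediate between the sensory input and a behavioral output."""
    if stimulus.intensity > 0.7:
        return "wince"
    if stimulus.intensity > 0.3:
        return "grimace slightly"
    return "carry on as before"


# Receiving an injection (sensory input) is evaluated and yields a behavior.
print(practical_reasoning(Stimulus(kind="injection", intensity=0.8)))  # wince
```

On the view sketched here, being conscious just is realizing the right mapping of this general sort (vastly more complicated, of course), which is why the medium doing the realizing gets treated as irrelevant in the next paragraph.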

The consequence of a computational model of mind is that the substratum in which (or out of which) consciousness is instantiated is inconsequential.  If we believe this, we say that consciousness is a multiply realizable property.  Multiple realizability means that the same property can be instantiated in different media.  For this reason, such theorists often accept what John Searle calls Strong Artificial Intelligence: the idea that a computer program could actually count as an instantiation of consciousness.  This is to be contrasted with Weak Artificial Intelligence, on which a computer program will never actually be conscious but can serve as a model that helps us better understand how minds might work.  

One problem occurring to me as I sit here is whether there is any property that is multiply realizable at all.  I was going to defend and explain the basic idea by invoking something like color, but even color turns out not to be multiply realizable.  I was thinking, “Red can be produced in paint, flowers, liquid crystal displays, etc.,” but this isn’t quite right.  While the chemical makeup of each of these is radically different from the others, red is still the product of the stimulation of the retina by a particular band of electromagnetic wavelengths (around 650 nm).  This already doesn’t look good for a computational model, because now you will have to explain why consciousness would be the only property that is multiply realizable.  Maybe behavioral reactions are multiply realizable, but that depends on how we parse our understanding of behavior.  Hmm…

Third, I want to use Leibniz’s Identity of Indiscernibles in a conditional argument.  To put it as simply as I can, the principle says: if object A and object B are alike with respect to their properties in every way, then object A is object B.  I also want to use the idea that there are categories, types, or forms of things.  Allowing me both of these will permit me to claim that types of things have corresponding sets of properties, and that if any two sets of properties are alike in every (relevant) respect, then the two types are identical.  I don’t think this is a particularly controversial claim to make.  I think that much of the scientific enterprise, at least in biology, operates on these principles.  When we discover an organism, we catalog its features and classify it accordingly.  When we discover another organism, we do the same thing.  If both of these organisms share the same sets of properties, even though they are two different instantiations, we acknowledge that they are of the same type.  Conversely, taking types to be individuated by their property sets, if two things are of the same type, then they will have the same sets of properties.  Thus, we know that if we find an organism of type T, it will have the properties associated with type T.  We can consider these properties as things the instance has the capacity to have; it need not actually have all of them (as in the case of a mutation or an injury).  The important point here is the apparent absurdity that results if these principles are violated, for then it would be possible for two things to have the same sets of properties and yet be of two different types.  No example of anything like this comes to mind, and it seems like nonsense to say, “These two things are alike with respect to all of their properties, but they are not of the same type.”  
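For readers who like seeing the principle spelled out, here is one compressed way to put it in symbols.  This is my own rendering, not Leibniz’s wording, and the notation P(T) for “the set of properties associated with type T” is something I am introducing just for this post.

```latex
% Identity of Indiscernibles, in one standard second-order rendering:
%   if a and b share all properties, they are identical.
\forall F \, \bigl( F(a) \leftrightarrow F(b) \bigr) \;\rightarrow\; a = b

% The type-level analogues assumed above, with P(T) the set of
% properties associated with type T:
P(T_1) = P(T_2) \;\rightarrow\; T_1 = T_2
\qquad \text{and} \qquad
T_1 = T_2 \;\rightarrow\; P(T_1) = P(T_2)
```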

Proceeding with the argument, we can start with the observation that consciousness is a property of a type (particular kinds of biological organisms).  Biological organisms do not have the same properties as computer programs.  Therefore, biological organisms are not of the same type as computer programs.  If they are not of the same type, then they will not have the same sets of properties.  
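Compressed into premise-and-conclusion form, and reusing the P(T) notation from the sketch above, my reconstruction of the argument runs roughly like this (a paraphrase of the paragraph above, not anyone’s official formulation):

```latex
% Reconstruction of the argument (requires amsmath).
% B = the relevant biological type, C = the type "computer program",
% P(X) = the set of properties associated with type X.
\begin{align*}
&\text{(1) consciousness} \in P(B)
  && \text{consciousness is a property of a biological type}\\
&\text{(2) } P(B) \neq P(C)
  && \text{organisms and programs differ in their properties}\\
&\text{(3) } P(B) \neq P(C) \;\rightarrow\; B \neq C
  && \text{contrapositive of } B = C \rightarrow P(B) = P(C)\\
&\text{(4) } B \neq C
  && \text{from (2) and (3)}\\
&\text{(5) } B \neq C \;\rightarrow\; P(B) \neq P(C)
  && \text{contrapositive of the type-level principle above}
\end{align*}
```

Note that step (5) only says the two types do not share their entire sets of properties; whether they can share any single property, such as consciousness, is exactly the question the next paragraph takes up.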

I think the claims so far are not terribly controversial.  Most of us, I think, would agree that biological organisms are not of the same type as computer programs and, consequently, do not share all of the same properties.  Now, the question is whether properties are unique to types.  If they are, then a computer program cannot possibly instantiate consciousness.  Can I add this to my argument?  One apparent counterexample concerns precisely the topic at hand.  After all, one might argue, “I can compose and utter sentences, and certainly I can create a computer program that can compose and utter sentences.  Even if we’re not of the same type, do we not share at least this property?  And if we can share one property, couldn’t we share another, such as consciousness?”  To this, there are two possible replies.  One could respond that the example cited is behavioral, whereas consciousness is not a behavior but a property, and two types cannot share any property even though they may mimic each other’s behaviors.  I’m sympathetic to that sort of answer, but I think the better response goes back to my critique of multiple realizability above.  Even if consciousness is merely behavioral, the objection might not work, depending on how we understand what constitutes a particular behavior.  What is a behavior?  Where does it start?  Where does it stop?  What do we take into account?  In our composition and utterance of a sentence, do we factor in muscle contractions, electro-chemical activity, expiration from the lungs, etc.?  If so, the two behaviors cannot possibly be the same, even though there is an appearance of similarity.  That would lead us, at most, to accept Weak Artificial Intelligence.

What is the alternative to the computational model?  The biological model of the mind, according to which consciousness is a feature of the biological makeup of certain kinds of organisms, just as respiration, metabolism, etc., are features of biological organisms.  John Searle defends this kind of theory.  I wouldn’t consider myself a “Searlean,” though.  In fact, prior to this past week, I was somewhat on the fence about this issue.  These thoughts occurred to me today, though, and seemed worth exploring.  If anything is unclear, it is because (1) I haven’t had enough time to sketch it out and am posting it mainly in order to write it down, and (2) I am writing at the end of a long day.  

I acknowledge that the argument is by no means demonstrative and relies on accepting a few conditionals.  For that reason, perhaps the title of this post would be better as “A Possible Error of the Computational Model.”  However, I think the conditionals are fairly defensible, and that makes the conclusion likely.  Still, I am certainly open to being convinced otherwise.  

What are your thoughts?  

Some Further Reading:

Block, Ned. “On a Confusion about a Function of Consciousness.” Behavioral and Brain Sciences 18 (1995): 227-47. Print.

Chalmers, David John. The Conscious Mind: In Search of a Fundamental Theory. New York: Oxford UP, 1996. Print.

Dennett, Daniel Clement. Consciousness Explained. Boston: Little, Brown and Company, 1991. Print.

---. Sweet Dreams: Philosophical Obstacles to a Science of Consciousness. Cambridge, MA: MIT Press, 2005. Print.

Forrest, Peter. “The Identity of Indiscernibles.” Stanford Encyclopedia of Philosophy. 15 Aug. 2010. Web. 13 Apr. 2012. <http://plato.stanford.edu/entries/identity-indiscernible/>.

Leibniz, Gottfried Wilhelm, and Leroy E. Loemker. Philosophical Papers and Letters. Chicago: University of Chicago Press, 1956. Print.

Minsky, Marvin Lee. The Emotion Machine: Commonsense Thinking, Artificial Intelligence, and the Future of the Human Mind. New York: Simon & Schuster, 2006. Print.

Nagel, Thomas. “What Is It Like to Be a Bat?” Philosophical Review 83 (1974): 435-50. Print.

Searle, John R., Daniel Clement Dennett, and David John Chalmers. The Mystery of Consciousness. New York: New York Review of Books, 1997. Print.

Searle, John R. Mind: A Brief Introduction. Oxford: Oxford UP, 2004. Print.

