A Subsymbolic Computationalist Reply to the Chinese Room

Ken Marable
May 1995
Presented at the 1995 Annual Meeting of the Michigan Academy of Science, Arts, and Letters.

Contents

Abstract
  1. Introduction
  2. The Chinese Room
  3. Terminology
  4. Symbolic v. Subsymbolic Computationalism
  5. Clarifying the Chinese Room
  6. The Chinese Room and Symbolic Computationalism
  7. The Chinese Room and Subsymbolic Computationalism
  8. Where Chalmers Goes Wrong
  9. Reworking the Subsymbolic Computational Reply
  10. Is this Still Computationalism?
  11. Conclusion
Bibliography
Footnotes

Abstract

For well over a decade, computationalists have been responding to John Searle's Chinese Room argument with varying success. My focus is one such reply recently presented by David Chalmers. He claims that the Chinese Room argument only works against the traditional, symbolic form of computationalism, and that subsymbolic computationalism is immune to the Chinese Room due to its distributed representational nature. This difference between the semantic interpretations of the two computational variants is, according to Chalmers, the crux of what saves subsymbolic computationalism. However, I feel that Chalmers' argument, after some clarification, is headed in the right direction but still faulty. I offer a slight reworking of Chalmers' view that retains his original intentions but is able to circumvent the Chinese Room's influence.

1. Introduction

Few single arguments in the philosophy of mind are as enduring and widely discussed as John Searle's Chinese Room (1980). Perhaps one reason for this continuing influence is the strength of its claims. Searle believes that he has refuted the entire artificial intelligence endeavor: that no computer program of any sort could ever become a mind, and that artificial intelligence studies are useful merely as a tool for better understanding our minds and nothing more. These bold claims have sparked much debate, especially among the computationalists, who stand in complete opposition to Searle's view.

One recent objection, by David Chalmers (1992), is particularly interesting in my opinion. He believes that the Chinese Room argument is not as all-encompassing as Searle claims. Chalmers draws a distinction between symbolic and subsymbolic computationalism (explained in section 4) and argues that this distinction is very relevant to the issue. Chalmers believes that, through some of its special characteristics, subsymbolic computationalism can avoid the Chinese Room argument. Instead of arguing that subsymbolic computationalism definitely does have meaning and explaining exactly how it might do so, he merely claims that it can avoid one particular argument (albeit a powerful and popular one) and thereby preserve the possibility that this form of computationalism may have meaning.

I feel that Chalmers' argument does raise a strong objection against the Chinese Room. However, there are problems with the direction in which Chalmers carries his objection. With some minor modifications, I feel that his objection becomes quite a powerful crack in the Chinese Room's influence. Perhaps Searle's arguments are not as overwhelming and encompassing as he might like to think. I will start by outlining Searle's original Chinese Room argument and then clarify some of its ambiguous terminology. After laying out the two computationalist positions, I will present a clarified version of the Chinese Room. I will then show how the clarified Chinese Room applies to symbolic computationalism, and present Chalmers' position on how it applies to subsymbolic computationalism. Then I will point out where Chalmers runs into trouble. Lastly, I will rework Chalmers' argument into a form that does seem to refute the Chinese Room's case against subsymbolic computationalism.

2. The Chinese Room

Searle's original thought experiment, the Chinese Room, was rich with intuitions; however, Searle has often been criticized for just how intuitional the argument is. Daniel Dennett has even gone so far as to say that these intuitions "are defective; they do not enhance but mislead our imaginations" (Dennett 1991, p. 440). Searle has also presented a variation of the argument in a more clearly stated form which is much easier to discuss. This restatement takes the form of three axioms and his conclusion (Searle 1984, p. 39).

1) Syntax is not sufficient for semantics.
2) Minds have content, specifically, they have semantic content.
3) Computer programs are entirely defined by their formal syntactical structure.
Conclusion: Instantiating a program by itself is never sufficient for having a mind.
However, even this reformulation relies on intuition, although to a lesser degree than before. This is mostly due to the wording that he uses. Searle states his argument in very simple terms. With the complexity and fine distinctions in this area, though, simple terms are not good enough.

I will clarify these terms and develop a finer-tuned argument in just a moment. First, let's take a quick look at those intuitions, which will help us to get a general understanding of the argument before we get into the nitty-gritty details. Searle is basically saying that computer programs operate only by rules that are syntactic, and that syntactic rules are unable to endow the symbols they manipulate with meaning. In other words, computer programs are simply a collection of rules, where any meaning that the variables (the computer's symbols) may or may not have does not matter in the least to the program. It just computes away on these variables, which could mean anything, or nothing at all. For example, a computer program may involve the variable ELEPHANT, but all that the program is concerned with is a binary string of 1's and 0's. We, the programmers, are the ones who decide that it corresponds to the concept elephant. Since there are no other avenues open for computer programs that Searle sees as relevant, they cannot have any meaning. This is a very simple overview of the Chinese Room argument, but it should at least get us thinking in the general direction Searle wants us to.
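To make the intuition concrete, here is a minimal sketch in Python (the variable names and the rule are hypothetical, chosen purely for illustration). The program's rule inspects only the form of its token, so relabeling the token changes nothing about what is computed:

    ELEPHANT = 0b101101   # to the program, just a string of 1's and 0's

    def formal_rule(token):
        # A purely syntactic rule: it inspects only the token's bits,
        # never any meaning the token is supposed to carry.
        return (token << 1) & 0b111111

    TEAPOT = ELEPHANT     # the same token under a different label
    assert formal_rule(ELEPHANT) == formal_rule(TEAPOT)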

3. Terminology

So now that we have laid out the basis of Searle's argument, let's dive in and clear up some of the wording. The term computational system deserves some clarification. What exactly separates one system from another? I feel that a good definition of a computational system is a series or group of processes that follow a given set of rules. The characteristics of a particular system are thus defined by the rules that govern it, and so the characteristics of the rules themselves become integral to the nature of the system as a whole.

The idea of computational tokens is often misinterpreted as well. First off, a token of any sort is the thing that the rules operate over. Roughly, grammatical rules operate over words, and mathematical rules operate over numbers. Computational tokens are the base objects that the computer program uses to compute. They are the base variables and/or nodes of the program or network. What they physically are depends on the particular computer model used to instantiate them, but for philosophical purposes they are the basic things that the rules use and operate by.

I wish to clarify tokens just a step further. The term is used sometimes to refer to only the meaningless objects that the rules operate over, and sometimes as objects that may or may not have meaning. Within this paper, I will refer to the meaningless objects or aspects of objects as tokens and the (potentially) meaningful objects or aspects of objects as representations. In other words, the computational rules operate over tokens which may or may not be representations, depending on what kind of computationalist you are.

Searle relies heavily upon the language of syntax and semantics. These terms are quite vague, even in their native discipline of linguistics. When specifically defined, they do become a useful shorthand, as Searle uses them. The only problem is that Searle does not specifically define them. Syntax, for our purposes, refers to any rules that operate with no reference to the (internal) meaning of their tokens. Such rules use the tokens as meaningless objects, just as the grammar of a language is not specific about the meaning of its words.

Semantics is the meaning that Searle is searching for. Searle regards the presence of meaning as a necessary and sufficient condition for the existence of a mind. However, meaning can pop up in many guises. The one Searle uses is internal meaning. This is the meaning that is inherent to the system; it is self-evident without any outside influence whatsoever. More meaning may or may not arise from that outside influence, but it is all grounded in the internal meaning of the system. This is to be distinguished from external meaning, which is created solely by outside forces; interpretive meaning, which does bring the system itself into the picture but is still dependent upon outside forces; and original meaning, which is built into the system from the start.1

4. Symbolic v. Subsymbolic Computationalism

Now that we have the terminology cleared up, let's clarify Searle's main target, or targets as the case may be. Computationalism is the theory that a purely computational system can give rise to a mind. In other words, a mind can be captured algorithmically in a computer program and instantiating that program or class of programs is enough to create or be a mind. Not all computationalists agree on how this occurs. Recently they have split into two major camps.

Symbolic computationalism is the traditional version. In it, the base computational tokens are also the base objects of semantic interpretation; the tokens are thus representations. The fundamental objects that the system computes with are alleged to contain the meaning for the system as well. So if a system as a whole were currently representing dog, then each individual token would represent, say, a particular feature of a dog. There would be a 4 legs token, a cold, wet nose token, and so forth. Each of these tokens would directly refer to and represent a feature of the dog (this becomes clearer by contrast with subsymbolic computationalism below). This is the dominant form of computationalism, but a new arrival is gaining quite a following of its own.
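As a rough illustration, a symbolic representation might look like the following Python sketch (the feature names are hypothetical). The point is only that each token is atomic and is itself an object of semantic interpretation:

    DOG = {"FOUR_LEGS", "COLD_WET_NOSE", "FUR", "BARKS"}
    CAT = {"FOUR_LEGS", "COLD_WET_NOSE", "FUR", "MEOWS"}

    def overlap(concept_a, concept_b):
        # The rule counts shared tokens without ever looking "inside" them;
        # each token is an atomic symbol alleged to carry meaning on its own.
        return len(concept_a & concept_b)

    print(overlap(DOG, CAT))  # 3 shared feature tokens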

Subsymbolic computationalism is inspired by the connectionist movement in computer science (Rumelhart, McClelland, & the PDP Research Group 1986). Subsymbolic computationalism splits levels and has its lowest level of semantic interpretation above the level of computation. As Chalmers explains it, the base computations take place at one level and in some manner blend together to give rise to meaning at a higher level. As he states the symbolic/subsymbolic distinction (Chalmers 1992, p.34):

In a symbolic system, the objects of computation are also the objects of semantic interpretation. In a subsymbolic system, the objects of computation are more fine-grained than the objects of semantic interpretation.

The representations are distributed over a set of computational tokens, rather than there being a one-to-one relation between them. The entire network, or at least a significant portion of it, is the representation. Furthermore, depending upon the network's pattern of activation, it can be any of a number of representations. This type of non-atomic, distributed representation is a unique, and quite important, feature of subsymbolic computationalism (Hinton, McClelland, & Rumelhart 1986). To continue the dog example, if a subsymbolic computational system is currently representing dog, each token would not represent a feature of a dog, as it would in a symbolic computational system. In fact, each token would not refer directly to anything at all and would consequently have no meaning. The only meaning is in the entire system's particular state of activation, in this case dog. Alter any single token and the representation of the entire system will change. Even though each token plays a causal role in the distributed representation, none of them carries any semantic burden. All that they do is crunch and gurgle the information in a particular way so that the system as a whole can have meaning.
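A minimal sketch, with entirely hypothetical activation values, may help. No single unit below stands for any feature of a dog; only the whole pattern is interpreted, and disturbing one unit can change what the whole system represents:

    import numpy as np

    # Hypothetical activation patterns; only whole patterns are interpreted.
    dog = np.array([0.9, 0.1, 0.7, 0.3, 0.8])
    cat = np.array([0.8, 0.2, 0.1, 0.9, 0.6])

    def interpret(pattern):
        # Semantic interpretation applies to the entire activation state,
        # never to individual units.
        concepts = {"dog": dog, "cat": cat}
        return min(concepts, key=lambda c: np.linalg.norm(pattern - concepts[c]))

    altered = dog.copy()
    altered[2] = 0.0           # alter a single computational token...
    print(interpret(dog))      # "dog"
    print(interpret(altered))  # ...and the whole representation can shift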

Subsymbolic computationalism may have stemmed from connectionism, but there are indeed very significant differences. Connectionism is best seen as a computer science style of computing, and subsymbolic computationalism as the philosophical theory. Connectionist machines stand in contrast to von Neumann computers (the standard for nearly all computers since the 1940s), but connectionist computation can be implemented as a symbolic rather than subsymbolic system. It depends solely on whether or not the base computational tokens are also the objects of semantic interpretation. The interconnection-based architecture may lend itself more easily to the subsymbolic style of computationalism, but neither is necessary for the other.

The fundamental and most relevant difference between the two forms of computation concerns the minimum level of semantic interpretation: it is either at the level of base computation or higher. Even higher levels of meaning are certainly possible (if any are) in both systems. In most variations, though, any higher meaning is dependent in some way on the presence of the lowest level of meaning. Consequently, it is this lowest level of semantic interpretation that I will be concerned with.

5. Clarifying the Chinese Room

So far we almost have the entire groundwork for this issue laid out. We have a general understanding of Searle's argument, a clearly defined set of terminology, as well as the opposing views explained. The last thing that remains is to get a clear understanding of the Chinese Room that we can deal with and not concern ourselves with intuitions and thought experiments. I will restate Searle's argument with the terminology I laid out. Hopefully, this is merely a fine-tuning of the argument that remains true to Searle's intentions.

1) Manipulation of tokens by rules that have no reference to the content of those tokens in and of itself is not enough to endow the system with internal meaning.
2) Minds have internal meaning.
3) Computational systems are entirely defined by rules that have no reference to the content of their tokens.
Conclusion: Computational systems alone are not sufficient to have or create internal meaning, and therefore, a mind.
Basically, if the objects of semantic interpretation operate under rules that have no reference to their content, then they necessarily cannot have content.

A reliable test to see if rules do indeed operate without reference to content is to see whether tokens are interchangeable without violating the rules or reworking the system. To continue the analogy with language, English grammar is a set of rules that operate with no reference to content. They merely look at the part of speech of a word and not its meaning. This can be shown by interchangeability. The sentence "The pills killed the virus." can be changed to "The pills killed the Vice President." or "The pills killed themselves." without violating any grammar rules. Grammatically, meaning does not matter in the least.

The rules of English semantics are another matter. Going back to the three lethal pill sentences, the first two are semantically correct, though with greatly different meanings, whereas the third violates the rules of semantics. There is something intrinsic to the concept of pills that prevents them from killing themselves. The words are not interchangeable. Semantic rules do operate with reference to the internal structure and relations of the words.

For computational systems, interchangeability shows whether the tokens operate under rules that have reference to their internal content or not. If the rules operate with no reference to the tokens' contents, then the tokens are taken and used as faceless wholes; any meaning or internal structure they may possess is irrelevant. The meaning of a particular token could be switched with another and the rules would function just fine and dandy. Returning to the dog example, we could decide that the token for cold, wet nose would instead mean 4 legs, and the system would compute just fine. The meanings are just labels for the tokens, and there is nothing intrinsic in the tokens themselves that forces them to represent one thing instead of another.
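The test is easy to picture in code. In this hypothetical sketch, the rule consults only the token itself, so swapping the two labels leaves every computation untouched:

    # Hypothetical labels attached to two tokens.
    labels = {0b0001: "COLD_WET_NOSE", 0b0010: "FOUR_LEGS"}

    def rule(token):
        # A purely formal operation on the token's bits; the labels above
        # never enter into the computation.
        return token ^ 0b0011

    before = rule(0b0001)
    # Interchange the meanings of the two tokens:
    labels[0b0001], labels[0b0010] = labels[0b0010], labels[0b0001]
    assert rule(0b0001) == before  # the rules function exactly as before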

6. The Chinese Room and Symbolic Computationalism

Now that we have cleared up Searle's argument and the two main views he is attacking, I will look at how the two forms of computationalism stand up to the Chinese Room. Searle uses his argument rather effectively against symbolic computationalism; in fact, it is apparent that Searle had symbolic computationalism in mind when he formulated the Chinese Room. Here the objects of minimum semantic interpretation are also the base tokens of computation. Due to their atomic nature, the tokens are interchangeable. For example, a particular variable could mean anything and the system would function normally. The rules would compute over it in exactly the same manner no matter what.

This follows Searle's argument exactly. The process of computing over these tokens has no regard for their content, and therefore cannot give them content. Since the rules are unable to give those objects internal meaning, there are no alternative sources from which meaning could arise in the system itself.

Some symbolic computationalists have tried to find ways of refuting this argument, but that will not be my concern here. Basically, symbolic computationalism walks directly into Searle's argument.

7. The Chinese Room and Subsymbolic Computationalism

What about subsymbolic computationalism? Searle would most likely feel that since it is also computational, it must fall to the same argument in the same way. Chalmers thinks differently. He believes that the differences between symbolic and subsymbolic computationalism are significant enough to warrant consideration, and that these differences indeed set subsymbolic computationalism apart when dealing with the Chinese Room.

At the lowest level of computation, the rules operate without any reference to the content of the tokens, just as with symbolic computationalism. The system is still computational. However, the subsymbolic computationalist does not claim that meaning exists at this level. The splitting of levels is the key to meaning in this system.

At the minimal level of semantic interpretation, which is above the level of computation, the tokens now have a rich internal structure made up of the base computational tokens, due to their distributed representational and therefore non-atomic nature. Whereas the computational tokens may be interchangeable, tokens at this level of semantic interpretation have an internal structure that does not allow them to be interchanged. The internal structure of these representations is relevant simply because they overlap: interchanging two tokens would alter the internal structure of a third one. Therefore, at this level the system does operate by rules that have reference to the content of its tokens. This, Chalmers believes, refutes the Chinese Room argument.
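The overlap point can be made concrete with another hypothetical sketch. Because the representations share underlying units, the swap that exchanges two representations necessarily disturbs a third:

    import numpy as np

    # Hypothetical distributed representations sharing the same three units.
    dog  = np.array([0.9, 0.1, 0.7])
    cat  = np.array([0.1, 0.9, 0.7])
    bird = np.array([0.9, 0.7, 0.1])

    swap = [1, 0, 2]  # interchange units 0 and 1

    # The swap exchanges "dog" and "cat"...
    assert np.allclose(dog[swap], cat) and np.allclose(cat[swap], dog)
    # ...but it also corrupts "bird", whose internal structure overlaps theirs.
    assert not np.allclose(bird[swap], bird)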

So where exactly does Chalmers think Searle goes wrong? Chalmers believes that Searle's first axiom (syntax is not sufficient for semantics) is simply false. As an example, he brings up the human brain.

On the highest level, the human mind seems extremely flexible, producing the very antithesis of rule-following behavior; yet at the bottom level, it is made up of a physical substrate, consisting of such entities as elementary particles and electric charges, whose actions are determined by the laws of physics (Chalmers 1992, p.39).
In other words, the human brain follows the laws of physics, which are merely syntactic rules. Since the brain has internal meaning (if anything does), the axiom that syntax is not sufficient for semantics must be false: in the case of the human brain, syntax clearly does suffice. Chalmers takes this to mean that syntax can in fact be sufficient for semantics, only at a higher level. His reformulation of Searle's first axiom would be:
1' Manipulation of tokens by rules that have no reference to content in and of itself is not enough to endow these tokens (not the system) with internal meaning.
Concl' Computational systems alone are not sufficient to endow the computational tokens with internal meaning.
Chalmers finds that this reworking retains the same force against symbolic computationalism, but not against subsymbolic computationalism. Although it lets the subsymbolic variation slip through, the reworked argument actually seems stronger, since it aptly compensates for Chalmers' objection.

8. Where Chalmers Goes Wrong

Now that I have laid out Chalmers' view, in true philosophical fashion, let's see what is wrong with it. I agree with Chalmers that the non-interchangeability of subsymbolic computationalism shows that it is immune to the Chinese Room argument, and therefore that Searle goes wrong somewhere. Where I depart from Chalmers is on where Searle's error occurs. For starters, what Chalmers is proposing is a reworking of what is meant by syntax and semantics. He is saying that syntax is indeed sufficient for semantics, merely at a higher level. It may not be so simple, though, to alter our notions of these concepts that quickly and easily. This revisionary position rests on his analogy to the human brain, and herein lies Chalmers' problem.

He compares the manner in which the human brain is governed by the laws of physics to the manner in which a computer program follows a set algorithm. Are these two actually equivalent? I do not believe so. They are two fundamentally different types of processes. The laws of physics are far different from the syntactic rules that make up computer program algorithms. Computer program rules fit into the linguistic explanatory framework; they can be clearly understood as syntactic, or even potentially semantic. However, it is quite a stretch to fit the laws of physics into that kind of linguistic framework. This "rule following"/"law governed" distinction is a rather fine one, but it is clearly applicable in this instance.

Another possibility that Chalmers briefly touches on is to use the human brain example again, except not quite at the molecular level. It could still be possible to look at the human brain as a purely syntactic system at the neural level: the behavior of all the brain's neurons could be explained with rules that lack reference to any internal content, and hence the brain would be a purely syntactic system at the neural (rather than molecular) level. The problem here is that this presupposes computationalism. Even though most of the debate is about the rules that concepts operate by, whether or not the brain operates by purely syntactic rules at the neural level is also still in dispute. Searle could simply deny that the neural level operates by syntax alone. Since this matter is still in contention, it is not a viable option for Chalmers. Perhaps there is another direction in which to carry Chalmers' objection.

9. Reworking the Subsymbolic Computational Reply

Let's take a moment to catch our breath. So far I have cleared up Searle's Chinese Room argument and looked at how each form of computationalism stands up against it. Symbolic computationalism appears to walk right into Searle's hands. Chalmers attempts to prove that subsymbolic computationalism, due to its distributed representational nature, has non-interchangeable tokens and is therefore immune to the Chinese Room. However, Chalmers' attempt to show where Searle goes wrong falls short. Now I will put forth where I believe Searle goes wrong and how Chalmers' objection best applies.

Can a system that follows rules that have no reference to the meaning of their tokens endow itself with meaning? If a system is defined entirely by those rules alone, as Searle's third axiom claims, then it cannot. A system that is entirely syntactical cannot, by definition, have semantics. Syntax is a matter of rules that have no reference to meaning, and since the system has the characteristics of its rules, the system has no meaning. In other words, syntax simply is rules without semantics, and so a system that is entirely syntactical is entirely without semantics.

Does splitting levels actually help bring about the possibility of meaning? It does indeed help subsymbolic computationalism circumvent the Chinese Room argument. It does not, however, provide a solid basis for having syntax create semantics at higher levels. That claim may or may not be true, but I do not feel that Chalmers provides enough support to alter people's notions of syntax and semantics; that has the potential of being a very large argument in itself. A stronger and more appropriate path, in my opinion, is to apply Chalmers' basic objection to the third axiom rather than the first:

3) Computational systems are entirely defined by rules that have no reference to the content of their tokens.

Are computational systems entirely defined this way? For any system, it is most appropriate to look at the minimal level at which the property in question is claimed to exist. If you want to see how a system computes, you look at the base level of computation. If you are looking for meaning, you should look at the minimal level of semantic interpretation, which is not necessarily the base level of computation. If the two levels are separate, as they are in subsymbolic computationalism, then one should look at the minimal level of the property in question and no lower. Looking any lower than where the property is claimed would be irrelevant.

Searle commits this error in the case of subsymbolic computationalism. Its supporters claim to have (or at least to have the possibility of) meaning at a level higher than the level of computation. Searle then looks to the level of computation and, finding no meaning, claims that there is none in the entire system.

However, this notion of separate computational and representational levels is rather difficult to pin down. I believe it is better to say that the system is functionally two separate systems. Going back to my definition of a computational system, we find that the two different levels operate under different sets of rules. The computational level uses rules that have no reference to the content of its tokens, whereas the semantic interpretation level follows rules that do have reference to the tokens' content, as Chalmers stated. This differentiates them functionally as two separate systems.

With two systems, the computation-only system falls to the Chinese Room. This is not a problem since no subsymbolic computationalist ever claimed meaning there. The semantic interpretation level, however, is resistant to Searle's argument. It operates by rules that do have reference to content and therefore can have meaning.

These two systems are not entirely independent of each other. I said that they are functionally two separate systems because they are really two different perspectives on one physical "thing". It is one physical system, but functionally two separate systems. You can view the same underlying system as either a syntactic or a semantic one, as long as it has the proper layout as a subsymbolic computational system.

Since we are now dealing with one "thing" that is actually two separate systems, it would seem that a new concept is in order. Perhaps it is best to consider these to be computational entities. A computational entity is a physical "thing" that is fundamentally implemented by a computational system. We could construct such an entity by rules that have no reference to the internal content of their tokens, arranged in the proper way so that a second system is created within the same entity, one that operates by rules that do indeed have reference to the internal content of their tokens.2
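A hypothetical sketch of this two-systems-in-one-entity idea: the same array of activations is one physical state, but it supports both a content-blind update rule and a content-sensitive interpretation (all values and prototypes below are invented for illustration):

    import numpy as np

    # One physical "thing": a state vector plus arbitrary connection weights.
    state = np.array([0.9, 0.1, 0.7, 0.3])
    weights = np.array([[0.5, 0.1, 0.0, 0.2],
                        [0.1, 0.5, 0.2, 0.0],
                        [0.0, 0.2, 0.5, 0.1],
                        [0.2, 0.0, 0.1, 0.5]])

    # System 1: base computation. The rule consults only the numbers;
    # it has no reference to what, if anything, the state represents.
    def update(s):
        return np.tanh(weights @ s)

    # System 2: semantic interpretation. Whole patterns, with their
    # overlapping internal structure, are treated as contentful.
    prototypes = {"dog": np.array([0.9, 0.1, 0.7, 0.3]),
                  "cat": np.array([0.1, 0.9, 0.3, 0.7])}

    def interpret(s):
        return min(prototypes, key=lambda c: np.linalg.norm(s - prototypes[c]))

    # Two functional perspectives on one and the same entity.
    print(update(state), interpret(state))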

Simply put, under my view, talking only about different systems can quickly become confusing. One entity could instantiate several different systems. With subsymbolic computational entities, we are only considering the base level of computation and the level of semantic interpretation. Other examples of entities instantiating multiple systems include our classic language analogy: a human language can be seen as one entity that has various systems with independent rules, namely phonetics, grammar, word meaning, and sentence meaning. So now my restatements of the third axiom and the conclusion are:

3' Computational systems are entirely defined by rules that have no reference to the content of their tokens; however, a computational system may be only one of multiple systems within a single computational entity.
Concl' The computational systems instantiated within a computational entity are not sufficient to have or create internal meaning, and therefore a mind, within those computational systems.

10. Is this Still Computationalism?

Some computationalists may not agree with this version, though. The objection has been raised that it may no longer be computationalism. I believe that it is still within the bounds of computationalism, just not as extreme as the other views.

The meaning itself is not captured algorithmically in a direct manner; it is, however, captured indirectly. A physical system can have meaning if and only if it has a certain layout when viewed computationally. If it has a different computational layout, then the physical system will lack internal meaning no matter how it is viewed. The system of semantic interpretation and the system of base computation function independently of each other by different sets of rules, but their existence is still intimately connected. By looking for a computational entity with possibly multiple systems, rather than a single computational system, I believe we stand a much better chance of arguing against views such as Searle's. I feel this is enough for the view to be considered computationalism, and at least in support of artificial intelligence research.

11. Conclusion

We have seen that Searle's Chinese Room argument (when properly clarified) presents quite a problem to symbolic computationalism. However, Chalmers shows that the Chinese Room does not deal with subsymbolic computationalism in the same manner.

At first, he tries to prove that by having the level of computation below the level of semantic interpretation, it is possible for the lower-level syntax to create higher-level semantics. This view, however, attempts to alter our concepts of syntax and semantics on the basis of the brain being a syntactic system at the molecular level while still possessing internal meaning. It ignores the distinction between being governed by the laws of physics and following algorithmic rules.

An alternative is to view the one physical system from two separate perspectives, making it functionally two separate systems. This perspective shift is relevant to subsymbolic computationalism due to the internal structure of the distributed representations. Therefore, subsymbolic computationalism is able to circumvent the Chinese Room argument.

In certain cases, a base computational system can now be seen as more than just a collection of rules with no reference to meaning. It is reasonable to look at the entire entity instantiating the computational system from a higher perspective where meaning is possible. At the very least, the Chinese Room does not prevent meaning from occurring in all computational entities.


Bibliography

Chalmers, D."Subsymbolic Computation and the Chinese Room," Symbolic and Connectionist Paradigms: Closing the Gap. Hillsdale: L. Erlbaum Associates, 1992.

Dennett, D. Consciousness Explained. Boston: Little, Brown and Company, 1991.

Hinton, G., McClelland, J., & Rumelhart, D. "Distributed Representations," Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge: MIT Press, 1986.

Rumelhart, D., McClelland, J., & the PDP Research Group. Parallel Distributed Processing: Explorations in the Microstructure of Cognition. Cambridge: MIT Press, 1986.

Searle, J. "Minds, Brains, and Programs," The Behavioral and Brain Sciences, 3, 1980.

Searle, J. Minds, Brains, and Science. Cambridge: Harvard University Press, 1984.


Footnotes

  1. One point to clear up is that since Chalmers is trying to disprove an impossibility argument, all he needs to do is prove a possibility. Consequently, the meaning we will be concerned with can best be termed semantic interpretation and not necessarily realized semantics.
  2. It is possible to have a computational entity that has only one system. Implementations of symbolic computationalism would be such entities. In the case of single systems, though, the clarification of calling it a computational entity is not necessary. The term is really only useful when dealing with multiple systems within a single entity.
