A Short Rebuttal to Searle (1984) [pdf] (stanford.edu)
30 points by headalgorithm on Jan 19, 2019 | 37 comments



Almost 20 years before Searle, a short fictional story by A. Dneprov, "The Game", was published in the Soviet pop-science magazine "Knowledge is Power". Not only did it describe the essence of the Chinese room setup, it also came to the same conclusion:

> If you, being structural elements of some logical pattern, had no idea of what you were doing, then can we really argue about the ‘thoughts’ of electronic devices made of different parts which are deemed incapable of any thinking even by the most fervent followers of the electronic-brain concept? You know all these parts: radio tubes, semiconductors, magnetic matrixes, etc. I think our game gave us the right answer to the question ‘Can machines think?’ We’ve proven that even the most perfect simulation of machine thinking is not the thinking process itself, which is a higher form of motion of living matter. [1]

[1] A Russian Chinese Room story antedating Searle’s 1980 discussion http://www.hardproblem.ru/en/posts/Events/a-russian-chinese-...


The last sentence nicely sums up my viewpoint on Searle's Chinese Room:

> For all I know, Searle may only be behaving as if he were thinking deeply about these matters. But, even though I disagree with him, his simulation is pretty good, so I’m willing to credit him with real thought.


I've often thought Searle's Chinese Room argument was vapid, and have been unsure why so many seem to hold it in high regard. This article was helpful on that point; Searle seems to think that he can presume that there is a set of rules that allows for translation; without knowing those rules (i.e., the "program"), the argument becomes uninteresting.


> vapid

I'm not sure that is the right word, at least for me.

I teach, and every once in a while a person asks a question that I just do not understand. It is a bad feeling; I am left standing in front of everyone saying "Could you repeat it in other words? Could you give another example?"

I have that same feeling on reading Searle's paper, and on hearing him talk about it on In Our Time. He says a number of things that I agree with. Then he says, "Therefore obviously computers cannot understand Chinese." I'm left slackjawed, wondering whether some kind of type error happened or whether I'm just too dumb. (Entirely possible, no doubt.)

I don't have the 'vapid' sense that I understand the argument and that it is incorrect; rather, I'm left with a sense of "What just happened?"


I understand Searle's argument to be that symbol manipulation does not result in understanding. Understanding or semantics is something else. He seems to also want to tie consciousness into understanding. Some people would argue the symbols need to be grounded by experience, and experience has a conscious component.


He stressed on many occasions that computers could understand Chinese, just not the typical digital ones programmed as simple symbol manipulators.


Sounds like nonsense. There's no meaningful difference between digital computers and other kinds of physically realisable computers.


There is a difference for Searle if only certain kinds of physical systems are conscious.

The other issue in philosophy of mind is intentionality, which is how any physical system can be about something. Humans are both conscious and possess beliefs, desires and what not.


> There is a difference for Searle if only certain kinds of physical systems are conscious

There is no meaningful difference here either. Physical systems also have bounded information content as per the Bekenstein Bound.

> The other issue in philosophy of mind is intentionality, which is how any physical system can be about something

I don't see the great disconnect here either. You have to treat systems of general intelligence as if they have intentions. What else is there?


Well, the burden of proof is on you. The Church-Turing thesis is not a theorem, it is just a conjecture. It might be true, it might not be. Now, if you define a physical computer as a Turing machine and nothing else, then yeah, we humans might well not be computers at all.

But that was not my point. Digital computers could, in principle, be conscious, but the idea that you can construct a mind by writing a symbolic program is flawed. The human brain could, in fact, be a digital computer, but the mind in it is not the result of some vulgar symbolic computation; it is the result of some mysterious computational process that very possibly has nothing to do with manipulating abstract symbols.


> Now, if you define a physical computer as a Turing machine and nothing else, then yeah, we humans might well not be computers at all.

Gee, I come to the opposite conclusion. If we define what can be computed as what a TM can compute, then we are TMs, that is, computers. I understand Searle to instead be arguing that humans are not TMs, that there is an aspect of computing in the world that is not captured by the definition Turing gave, and that it is whatever is involved in understanding.

But the argument, as I hear it, concludes with something like "and it is obviously wrong to think this is what we are doing when we feel we understand something." That's not obvious to me.

> some vulgar symbolic computation

I guess I still don't get it. I've not seen any evidence that what goes on between my ears is not doable this way, but I am very ready to concede that the jury is still out. I just don't understand the part where Searle says "obvious."


Isn't it a fallacy to search for the reasons why it is not doable? I gave my reasons in my other comment; you may find them interesting.


> Isn't it a fallacy to search for the reasons why it is not doable?

I'm sorry, I'm not understanding. Are you asking if a person should look for types of computation that are more powerful than Turing machines? If so, I agree, you should.


> Now, if you define a physical computer as a Turing machine and nothing else, then yeah, we humans might well not be computers at all

We're not even Turing machines. The Bekenstein bound implies that humans are at most finite state automata.
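
Back-of-the-envelope arithmetic for that claim (my own sketch; the radius and mass are rough assumptions, not measurements): the Bekenstein bound caps the information a bounded physical system can hold at I <= 2*pi*R*E/(hbar*c*ln 2) bits.

    # Rough Bekenstein-bound estimate for a brain-sized system (Python).
    # Assumed inputs: radius ~0.1 m, mass ~1.5 kg -- order-of-magnitude only.
    import math

    hbar = 1.054571817e-34   # reduced Planck constant, J*s
    c = 2.99792458e8         # speed of light, m/s

    def bekenstein_bits(radius_m, mass_kg):
        """Upper bound on bits storable in a sphere of this radius and mass."""
        energy = mass_kg * c ** 2
        return 2 * math.pi * radius_m * energy / (hbar * c * math.log(2))

    print(f"{bekenstein_bits(0.1, 1.5):.2e} bits")   # ~3.9e42 bits: vast, but finite

Finite information content is the point: whatever we are, we are not machines with an unbounded tape.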

> The human brain could, in fact, be a digital computer, but the mind in it is not the result of some vulgar symbolic computation; it is the result of some mysterious computational process that very possibly has nothing to do with manipulating abstract symbols.

So a magical computer? What purpose in a brain's computation does the magic serve, exactly?


The very definition of a "symbol" presupposes that there is some entity that assigns meaning to that symbol. Well, unless you assume that Nature itself is conscious, it does not operate in terms of symbols and symbolic computations. You could claim that elementary particles are the symbols of Nature, and that all physical processes are computations involving such symbols, but that would be a degenerate case, which won't help your argument. Mind is a natural phenomenon that arises as the result of a certain physical process, in matter organized in a certain way. That is all. Until we learn why it happens in that type of matter, we will not be able to recreate minds.


> The very definition of a "symbol" presupposes that there is some entity that assigns meaning to that symbol

No, that's not what's meant at all. Searle is making a distinction between syntax and semantics, where computers process only syntax (symbols are meaningless tokens), but humans can reason semantically, where propositions carry meaning. Both perform symbol processing, so "symbolic computation" is not a useful distinction here.

Further, Searle is asserting that semantics cannot arise from syntax. But this is clearly false, as various methods of computational induction demonstrate: given only syntactic analysis of output bits, you can build a semantic model to predict subsequent bits.
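
A toy sketch of the kind of thing I mean (my own illustration, not any particular published algorithm): a predictor built purely from counts over a bit stream, with no meaning assigned to anything anywhere.

    # "Induction from syntax": predict the next bit from how often each bit
    # followed each recent k-bit context. The model is built entirely from
    # the symbol sequence itself.
    from collections import defaultdict

    def train(bits, k=3):
        counts = defaultdict(lambda: [0, 0])
        for i in range(k, len(bits)):
            context = bits[i - k:i]
            counts[context][int(bits[i])] += 1
        return counts

    def predict(counts, context):
        zeros, ones = counts.get(context, [1, 1])   # uniform guess if unseen
        return 1 if ones > zeros else 0

    stream = "0110" * 50                  # hypothetical example data
    model = train(stream)
    print(predict(model, "011"))          # -> 0, the bit that always follows "011"

Whether you want to call the resulting model "semantic" is exactly the question at issue, but it does end up tracking the regularities that generated the data.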

So what gives the symbols of the semantic model meaning? The correspondence of each symbol to real, observable objects.

> Until we learn why it happens in that type of matter, we will not be able to recreate minds.

You're assuming that consciousness requires some special property of matter. There's no justification for this assumption.

Furthermore, you are arguing that we cannot create something that we do not fully understand. This too seems false. We created fire long before we understood it, for instance.


> Both perform symbol processing

Not sure about that.

> So what gives the symbols of the semantic model meaning? The correspondence of each symbol to real, observable objects.

And what decides what symbol corresponds to what real object?

> You're assuming that consciousness requires some special property of matter. There's no justification for this assumption.

Really? I thought that was something we had agreed upon. You are claiming that any system structured as a digital computer (hence, one with a special property) is conscious; I am saying that is not enough, it has to be structured in some special way.

> We created fire long before we understood it, for instance.

This is a laughable argument: "We reproduce (as living beings do), therefore we create minds."


> You are claiming that any system structured as a digital computer (hence, one with a special property) is conscious

Nowhere did I make that claim. You're clearly confused about the arguments being made, so I suggest you reread this thread.

> And what decides what symbol corresponds to what real object?

An inductive inference algorithm, like I said.

> This is a laughable argument: "We reproduce (as living beings do), therefore we create minds."

Exactly right. Therefore your claim that we must understand something before we can make it is trivially incorrect.


Magic? No, no magic. Mystery.


Old, and not a particularly interesting rebuttal. Searle later stressed that the Chinese room argument demonstrates that programs, in and of themselves, cannot generate consciousness. This important aspect is not addressed in the rebuttal.

For an updated overview of the argument and replies, see: https://plato.stanford.edu/entries/chinese-room/


Searle claims that the argument demonstrates this, but his response to the 'systems reply' shows that he does not appear to understand the challenge that it presents to the argument:

"My response to the systems theory is quite simple: let the individual internalize all of these elements of the system. He memorizes the rules in the ledger and the data banks of Chinese symbols, and he does all the calculations in his head. The individual then incorporates the entire system. There isn't anything at all to the system that he does not encompass. We can even get rid of the room and suppose he works outdoors. All the same, he understands nothing of the Chinese, and a fortiori neither does the system, because there isn't anything in the system that isn't in him. If he doesn't understand, then there is no way the system could understand because the system is just a part of him."

Searle does not seem to understand that the systems reply is not dependent on where or how the components of the system are implemented. He apparently cannot conceive that, while the subject's conscious mind is solely occupied with mechanically performing the operations of the algorithm, the system that her conscious mind is part of actually understands something, and that this would be so even if the system were implemented entirely within the physical body of the subject. It is an outlandish concept, but the premise of the experiment is itself outlandish, and a philosopher should not expect ordinary intuitions to be a reliable guide to what would happen in such cases.


There are two types of expanded responses to the Systems Theory: the first is a line of argument extending into modal logic and the second is a line of argument that blurs the lines between the Chinese Room Argument and other arguments that purport to show computation as insufficient for 'mind'. Of course - as often happens over decades of discourse in analytical philosophy - the boundary between even these two lines of argument that I just classified is rather fuzzy (shoutout to Zadeh).

The SEP entry mentions Schaeffer (2009) and Nute (2011) for the line of disagreement extending into a discussion of modality.

Regarding the second line of argument, the entry mentions Harnad (2012) and says that he appears to follow Searle in, "linking understanding and states of consciousness" as well as arguing "that the core problem of conscious "feeling" requires sensory connections to the real world."

The primary issue in all of this is whether certain non-biological systems (e.g., a computer) are sufficient for the understanding generated by certain biological systems (e.g., humans). This has not much of anything to do with physicalism or dualism, no matter what certain responses by non-experts (Penrose, Kurzweil, etc.) would have some believe. The fact that Cole even mentions Kurzweil is, arguably, a disservice to those outside the relevant domain.

The take-away is this: in analytical philosophy, many famous thought experiments are important not for their intended conclusions, but for the expanded dialectic they generate, and which lasts for decades (if not longer). The expanded dialectic brings additional clarity and raises important questions that had either gone unnoticed or were impossible to see before the thought experiment.


Decades of arguments attempting to short-circuit the scientific process, by claiming to show that minds cannot possibly work this way or that way, do not seem to have contributed much, if anything, to our understanding of how they do work. It all seems very self-referential, in that there is a lot of arguing about arguments.


I think the suggestion that lengthy arguments in the philosophy of mind are attempting to short-circuit scientific progress is a bit uncharitable.

There is a small minority of individuals on both sides who seem to want to slow scientific progress (whether intentionally or not). Now, the vast majority of the time it is presumed that people like Searle fit this description. People like Dennett fit this description as well, though. Anyone truly making definitive claims about having resolved the question of (say) consciousness is not taking the right tack from a scientific perspective.


I did not intend to suggest that there is anything wrong with attempting to cut the Gordian knot with an insightful analysis; it is just that, in these cases, it does not seem to have led to anything useful. And Dennett's book 'Consciousness Explained' is certainly presumptuously titled, though he is at least looking for explanations when he is not confronting those who claim it can't be done. Meanwhile, the state of technology has advanced to the point where we can at least imagine Searle, in his room, competing successfully in a Chinese-language game of Jeopardy.

No-one ever disproved vitalism, but its implicit threat to the progress of biology simply dropped by the wayside.


The Chinese room thought experiment is more like Searle's attempt to get you on board with his intuitions about the philosophy of mind, but taken on its own as an argument it's pretty unsound.

His real argument lives in a bunch of stuff he wrote about intentionality (the real argument probably still fails, but it's more nuanced than the Chinese room stuff everyone talks about).

If you want a really good paper about the problems around machines and minds, check out "Troubles with Functionalism" by Ned Block.


Thought experiment is the right term for it. It applies equally to a brain if you think that the chemical and electrical processes in the brain are identical to understanding. Searle believes that consciousness is a fundamental fact.


> While I agree that AI research has a long way to go (perhaps several decades) before it might produce responsible machines

...written in 1984. I do not believe we've made any progress in terms of creating "responsible machines."

My general feeling about the Chinese Room argument is that it discusses the "psychological phenomenon of understanding." We don't really know what that phenomenon is; it's a feeling or sensation from our perspective (or at least from mine), and so I'm uncomfortable ascribing much importance to it. Until we understand how our minds actually work, the "Turing test approach" makes sense to me: if we can't tell the difference, then it's "responsible."


> Whatever the key to self-reflection turns out to be, it clearly will involve the processing of internal symbolic representations.

Clearly?


Yes, back then AI researchers thought that propositional logic was how human intelligence worked. Later, they abandoned this idea and instead turned to Bayesian probabilities as the most likely mechanism. Later they abandoned this idea and instead turned to connectionism as the most likely mechanism.

Suffice it to say that we didn't have a clue back then and we don't have a clue now, either. We don't have any good model of human intelligence. We don't understand it. We can't reproduce it. We don't know how.

And if you think that modern AI research is closer to that target because we now have good classifiers, well, think again. In the past, we had good inference engines. We had good ways to reason with uncertainty. All those things are useful, and they may even be components of intelligence, but they do not, on their own, constitute intelligence.

Clearly.


Depends on your definition of "symbolic".

If you mean symbolic in the sense that computers use symbols, then no, it won't involve that. Humans are self-reflective (at least some of the time), and they don't think that way (with the possible exception of mathematicians).

But words can also be considered symbols, and when we think, we usually think in terms of words. "I" is a symbol, and it's hard to self-reflect without using it.


I think in this context he's talking about a very specific meaning of symbolic, as opposed to connectionist, the two main classes of models of thought. Symbolic is something where you can make out the structure of what's going on (a requirement for analysis or "introspection"; hence "clearly"). The alternative is bottom-up modelling, e.g. with neural networks, where only the outcome is known and the internal processes can't really be analysed. A toy contrast is sketched below.
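
To make that concrete (a toy sketch of my own, not anyone's actual model): in the symbolic version the "knowledge" is a rule you can read; in the connectionist version it is a set of learned numbers whose individual values don't mean anything you can point to.

    import random
    random.seed(0)

    # Symbolic: the knowledge is an explicit, inspectable rule.
    def symbolic_is_hot(temp_c):
        return temp_c > 30      # you can read off exactly why it answered as it did

    # Connectionist: the knowledge is a pile of numbers tuned to fit examples.
    # (Tiny hand-rolled perceptron, illustrative only.)
    w, b = random.uniform(-1, 1), random.uniform(-1, 1)
    data = [(t / 50.0, 1 if t > 30 else 0) for t in range(50)]
    for _ in range(200):        # crude training loop
        for x, y in data:
            pred = 1 if w * x + b > 0 else 0
            w += 0.1 * (y - pred) * x
            b += 0.1 * (y - pred)

    def connectionist_is_hot(temp_c):
        return w * (temp_c / 50.0) + b > 0   # why these weights? only the data "knows"

    print(symbolic_is_hot(35), connectionist_is_hot(35))   # both True; one for a legible reason, one not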


My frank thought on this is that the AI researchers who placed their bets on symbolic logic were betting on the idea that symbolic logic is at a higher level than connectionist logic.

If you're not emotionally invested, that argument is laughable. Its total basis is that humans suck at math.


The original artificial neuron, the McCulloch-Pitts neuron, was a propositional logic circuit, conceived entirely on the basis of a model of biological neurons as logic gates, with a threshold function that controlled their true or false value. It was even called the Threshold Logic Unit. The paper where it was first described is titled "A Logical Calculus of the Ideas Immanent in Nervous Activity". In fact, the perceptron too was a logic circuit: a function with two possible outputs, "positive" and "negative", or "true" and "false".
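
For anyone who hasn't seen one, here is roughly what such a unit looks like (a minimal sketch; these particular weights and thresholds are just one conventional choice):

    # A McCulloch-Pitts style threshold unit: output 1 iff the weighted sum
    # of binary inputs reaches the threshold. Logic gates fall out of
    # particular weight/threshold choices.
    def tlu(inputs, weights, threshold):
        return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

    AND = lambda a, b: tlu((a, b), (1, 1), 2)    # fires only if both inputs fire
    OR  = lambda a, b: tlu((a, b), (1, 1), 1)    # fires if at least one input fires
    NOT = lambda a:    tlu((a,),   (-1,),  0)    # inhibitory weight flips the input

    print(AND(1, 1), OR(0, 1), NOT(1))           # -> 1 1 0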

AI researchers have always used the tools that all scientists use to model their subject, in this case intelligence: maths. Logic is maths, just like probability, calculus, algebra, and so on.


I think you’ve completely misunderstood what is going on here.


> If you mean symbolic in the sense that computers use symbols, then no, it won't involve that

Why not? Not consciously, but as part of the underlying process that produces consciousness. The particles and fields that make up our brains carry no more intrinsic meaning than the charged bits in digital computers.


After reading "I Am a Strange Loop" several years ago, I got on an ouroboros joke kick. I wrote one about John Searle. I'm posting it here because, so far, underneath this PDF is the only place it has ever seemed apropos.

What did the ouroboros say to John Searle?

"Let me out of here, you know I can't speak chinese!"



