Searle’s Chinese Room

Infrasonic
Posts: 2
Joined: Thu Aug 19, 2021 2:08 pm

Searle’s Chinese Room

Post by Infrasonic »

Hi everyone, first post, so I'll keep it short.

On p. 52 of Rationalist Spirituality, Bernardo makes the following claim about Searle's Chinese Room thought experiment:

“Now let us extend the thought experiment a bit ourselves. If the clerk, having internalised the entire manual, were also to learn the associations between each Chinese character and the entity of external reality it refers to, then I guess we would be safe in saying that he would indeed understand Chinese. In fact, this would be the very definition of learning a new language: the manual would give him the grammatical and syntactical rules of the Chinese language, while the grounding of Chinese characters in entities of external reality would give him the semantics. But notice this: the key reason why we feel comfortable with this conclusion is that we assume the clerk to be a conscious entity like ourselves”


Really? Isn't it a huge logical leap to invoke consciousness as necessary for understanding? Surely, as a computer scientist like Bernardo would know, a computer can be made to understand a language just as well as a human can; it is merely an algorithmic question, coupled with training on extensive, high-dimensional datasets.

We know computers aren't conscious. And we know that computers *can* understand just as well as we do. So why does BK invoke consciousness as necessary for understanding? Am I misunderstanding or missing something?
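For concreteness, here is how I picture Searle's original setup: the clerk applies purely syntactic rules, mapping input symbols to output symbols with no access to what any symbol means. (A toy sketch only; the rules and phrases are invented for illustration.)

```python
# Toy sketch of Searle's Chinese Room. The "manual" is a lookup
# table of purely syntactic rules; no meaning is consulted anywhere.
# All rules and phrases below are invented for illustration.

RULE_BOOK = {
    "你好吗": "我很好",          # rule: on seeing this string, emit that one
    "你叫什么名字": "我叫房间",
}

def clerk(symbols: str) -> str:
    """Apply the manual's rules; the clerk never knows what anything means."""
    # The fallback reply is also just another symbol to the clerk.
    return RULE_BOOK.get(symbols, "我不明白")

print(clerk("你好吗"))  # the room emits a fluent-looking reply
```

The question, as I see it, is whether scaling this up ever amounts to more than symbol shuffling.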

Thanks in advance for your time and patience with a newbie to your forum like me 😊

Infrasonic

Re: Searle’s Chinese Room

Post by Infrasonic »

Funnily enough, on the very next page (I am reading the book as I go), Bernardo uses that exact same example, that of a supercomputer, and yet maintains the claim that:

"the very notion of understanding resides eminently in conscious experience"

Aren't we confusing qualia, what it feels like to understand, with understanding itself? Can one really claim that an average human speaker of Chinese "understands" it better than a fluent supercomputer does? As long as the computer's training in Chinese is rich enough, its model (think of it as a weighted semantic graph, to simplify) will indeed *understand*. It may have no feeling or conscious experience of that understanding, but why should that limit understanding?

I understand and agree with the notion that intelligence does not confer consciousness. But why is *understanding* now placed in the consciousness camp, rather than being just the product of very high and effective intelligence?
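To make the "weighted semantic graph" picture concrete, here is a toy sketch (all words and weights invented): nodes are terms, weighted edges are association strengths, and the model's "understanding" consists in traversing those weights.

```python
# Toy "weighted semantic graph": nodes are words, weighted edges
# are association strengths. Words and weights are invented for
# illustration. Note the structure relates symbols only to other
# symbols, which is exactly the point under dispute.

graph = {
    "狗":   {"动物": 0.9, "猫": 0.6},   # "dog" -> "animal", "cat"
    "猫":   {"动物": 0.9, "狗": 0.6},   # "cat" -> "animal", "dog"
    "动物": {"狗": 0.5, "猫": 0.5},     # "animal" -> "dog", "cat"
}

def most_associated(word: str) -> str:
    """Return the neighbour with the highest edge weight."""
    neighbours = graph[word]
    return max(neighbours, key=neighbours.get)

print(most_associated("狗"))  # strongest association of "dog" is "animal"
```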
AshvinP
Posts: 5465
Joined: Thu Jan 14, 2021 5:00 am
Location: USA

Re: Searle’s Chinese Room

Post by AshvinP »

Infrasonic wrote: Thu Aug 19, 2021 3:34 pm Funnily enough, on the very next page (I am reading the book as I go), Bernardo uses that exact same example, that of a supercomputer, and yet maintains the claim that:

"the very notion of understanding resides eminently in conscious experience"

Aren't we confusing qualia, what it feels like to understand, with understanding itself? Can one really claim that an average human speaker of Chinese "understands" it better than a fluent supercomputer does? As long as the computer's training in Chinese is rich enough, its model (think of it as a weighted semantic graph, to simplify) will indeed *understand*. It may have no feeling or conscious experience of that understanding, but why should that limit understanding?

I understand and agree with the notion that intelligence does not confer consciousness. But why is *understanding* now placed in the consciousness camp, rather than being just the product of very high and effective intelligence?

You are confusing programmed rote intelligence with "understanding". How can we have true understanding if the world fundamentally consists of qualitative meaning (under idealism), yet we can never experience that meaning? It seems pretty obvious to me that is not possible. If I am given a copy of all Shakespeare's plays and recite them with perfect articulation, intonation, etc., but never experience the underlying meaning conveyed, does that mean I am the foremost expert on understanding all things Shakespeare? Obviously not.
"Most people would sooner regard themselves as a piece of lava in the moon than as an 'I'"
Starbuck
Posts: 176
Joined: Sat Jan 16, 2021 1:22 pm

Re: Searle’s Chinese Room

Post by Starbuck »

AshvinP wrote: Sat Aug 21, 2021 1:22 pm You are confusing programmed rote intelligence with "understanding". How can we have true understanding if the world fundamentally consists of qualitative meaning (under idealism), yet we can never experience that meaning? It seems pretty obvious to me that is not possible. If I am given a copy of all Shakespeare's plays and recite them with perfect articulation, intonation, etc., but never experience the underlying meaning conveyed, does that mean I am the foremost expert on understanding all things Shakespeare? Obviously not.
I agree. Joscha Bach makes that very error throughout this new podcast. I'm starting to think he is a programmed algorithm.

https://www.youtube.com/watch?v=rIpUf-Vy2JA