Searle’s Chinese Room
Posted: Thu Aug 19, 2021 2:25 pm
Hi everyone, first post here, so I'll keep it short.
On p. 52 of Rationalist Spirituality, Bernardo makes the following claim about Searle's Chinese Room thought experiment:
“Now let us extend the thought experiment a bit ourselves. If the clerk, having internalised the entire manual, were also to learn the associations between each Chinese character and the entity of external reality it refers to, then I guess we would be safe in saying that he would indeed understand Chinese. In fact, this would be the very definition of learning a new language: the manual would give him the grammatical and syntactical rules of the Chinese language, while the grounding of Chinese characters in entities of external reality would give him the semantics. But notice this: the key reason why we feel comfortable with this conclusion is that we assume the clerk to be a conscious entity like ourselves”
Really? Isn't it a huge logical leap to invoke consciousness as necessary for understanding? Surely, as a computer scientist like Bernardo would know, a computer can be made to understand a language just as well as a human can: it is merely an algorithmic question, coupled with extensive, high-dimensional datasets for the computer to train on.
We know computers aren't conscious. And we know that computers *can* understand just as well as we do. So why does BK invoke consciousness as necessary for understanding? Am I misunderstanding or missing something?
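To spell out what I mean by "merely an algorithmic question", here is a deliberately crude Python sketch of the clerk's manual (the phrases and the clerk function are just my own illustration, not anything from the book): a pure lookup table that maps input characters to output characters, with no grounding in external reality whatsoever.

# Toy sketch of the Chinese Room "manual": a lookup table mapping input
# symbol strings to output symbol strings. It manipulates syntax only;
# nothing in it is grounded in entities of external reality.
MANUAL = {
    "你好吗": "我很好",        # "How are you?" -> "I am fine"
    "你会说中文吗": "会一点",  # "Do you speak Chinese?" -> "A little"
}

def clerk(message: str) -> str:
    """Apply the manual's rules to an incoming string of characters."""
    return MANUAL.get(message, "请再说一遍")  # fallback: "Please say that again"

print(clerk("你好吗"))  # prints 我很好, with no semantics involved

Of course, a real system would replace the table with a trained model, but the question stands either way: at what point, if any, does this kind of symbol shuffling become understanding?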
Thanks in advance for your time and patience with a forum newbie like me.
Infrasonic