... how interesting that people are still completely captivated by the elusive ill-defined concept of consciousness
instead of asking about agency and behavior. but of course philosophy loves to paint itself into utterly useless corners. (and sure, it might even serve some abstract purpose, etc.)
...
initially I just wanted to write about how little use I think Searle's Chinese room experiment has. after all, it's devoid of any real constraints. again, something too abstract. (which is itself a strange infinite-dimensional logical cul-de-sac.)
so of course the problem is that a computer that "behaves" as if it understands Chinese, by answering Chinese input with Chinese output, doesn't mean much. what does it even mean to "behave as if one understands a language"? sure, later there's some mention of passing a Turing test. but that's a pretty low bar. and of course Searle himself wouldn't learn much from carrying out the steps of the computer by hand.
but let's look into this thing, because it's one of those unconstrained abstractions that can easily hide the required complexity.
yes, just as someone did some bitcoin (or whatever cryptocurrency) block computations by hand as beginner-level performance art, we could carry out by hand the program steps required to pass a Turing test, but there's an obvious difference. first of all, it would be fracking hacking clackin' dickbendingly hard to do it fast enough that the evaluator on the other side of the Turing test wouldn't notice. (but okay, okay, let's slow down both "test takers". but then still, let's ask them why it's taking so long to answer. again, sure, the room can just answer something that sounds convincing, right? let's say it says "oh sorry, I'm cooking dinner while taking the test". okay, then will the evaluator jump on this? does it matter? okay, can we logically color this corner with one color?)
either the room knows it's slow and fakes its answers accordingly, or it doesn't really know it, and so it can give inconsistent answers. (it can, for example, just deny it and "hope for the best".)
okay, but does behaving as knowing Chinese depend on speed?
sure, sure, again we can try to say that no, we do everything through snail mail, and we have Chinese pen pals ... and let's do the Turing test through boring old paper letters.
but then I claim that it's a piss poor test, and yes, speed matters. agency matters.
(but I also subscribe to the consciousness-is-a-spectrum theory, i.e. panpsychism, and a slow digital room can demonstrate as much consciousness as Searle carrying out the steps by hand.)
okay, but does the machine understand the conversation? well, it depends on the program. I think it depends on having some storage. it depends on learning.
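the storage point can be made concrete with a toy sketch (everything here is hypothetical, a minimal contrast I'm inventing, not Searle's actual setup): a pure lookup-table "room" versus one with a scrap of memory. only the second can stay consistent about something it was told earlier in the conversation.

```python
# toy rule book, the classic Chinese-room picture: input -> rule -> output
RULE_BOOK = {
    "hello": "hi there",
    "my name is Wei": "nice to meet you",
    "what is my name?": "I don't know",  # no storage: it can never know
}

def stateless_room(utterance):
    # nothing is kept between turns; same input, same output, forever
    return RULE_BOOK.get(utterance, "please rephrase")

class StatefulRoom:
    # same rule book, plus storage -- the minimal kind of "learning"
    def __init__(self):
        self.memory = {}

    def reply(self, utterance):
        if utterance.startswith("my name is "):
            self.memory["name"] = utterance[len("my name is "):]
            return "nice to meet you"
        if utterance == "what is my name?":
            return self.memory.get("name", "you haven't told me")
        return RULE_BOOK.get(utterance, "please rephrase")

room = StatefulRoom()
room.reply("my name is Wei")
print(stateless_room("what is my name?"))  # -> I don't know
print(room.reply("what is my name?"))      # -> Wei
```

the stateless version is exactly the kind of room an evaluator can trip up with a two-turn question, which is why I'd say "behaving as if it understands" quietly smuggles in a requirement for memory.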
and it seems strange that Searle can carry out the step-by-step instructions without understanding anything, but our cells do the same ... so it's not that surprising.