Talk:Chinese room

Can someone please explain, in HS English, why this isn't total nonsense?

So Searle doesn't understand Chinese, but the program does; what is the big deal? That is like saying you can't do sign language because your hands don't understand ASL. I'm sorry, I'm not trying to be rude, but this seems like a total waste of time; I must be missing something. — Preceding unsigned comment added by 50.245.17.105 (talk) 22:12, 1 March 2017 (UTC)

I'm afraid that you're not going to get a satisfying reply, because there isn't one. I don't remember who it was exactly, but I remember one philosopher saying that the Chinese room argument is so profoundly and intricately stupid that the whole field of philosophy of mind was reorganized around explaining why it's so completely wrong. In my experience, most philosophers think this argument is really terrible, but there can be value in articulating why. Other than that, yeah, Searle is mostly wasting everyone's time. Also, "Chinese" isn't a language, but that's beside the point. --ScreamingRobot (talk) 07:19, 19 June 2017 (UTC)
I agree with your main point, but I'd say that Chinese actually is a language. The word refers to Standard Chinese, also known as Mandarin or, rarely, Standard Mandarin. The word "Chinese" is sometimes used to mean a particular group of languages associated with China, but it's going too far to say "Chinese is not a language". Polar Apposite (talk) 17:37, 16 September 2022 (UTC)
Plenty of experts consider it weak, giving a similar argument. Our own brain consists of "parts using parts", so to speak. I'm actually surprised how little this is represented in this article. I study Artificial Intelligence, and you'd think the argument had already been disproved, given all the counter-arguments. Wikipedia seems pretty out of date here. Bataaf van Oranje (Prinsgezinde) (talk) 17:59, 18 October 2017 (UTC)
Yes, there is not even a criticism section in the article. Searching the article for "criticism" reveals a "further reading" section, where there are some links to works containing criticism of the Chinese Room thought experiment. None of the criticism linked to is in the article. Surely it should be? Polar Apposite (talk) 17:41, 16 September 2022 (UTC)
There used to be a criticism section but it was removed. Look at older versions for revisions by user "likebox". SilverCity1 (talk) 05:12, 3 February 2023 (UTC)
I looked at some lists of old revisions and used Ctrl+F to search them for "likebox" (without quotes), but I couldn't find it. Is there a better way to search for "likebox"? How old, approximately, are the versions that you are referring to? Polar Apposite (talk) 15:47, 3 February 2023 (UTC)
https://en.wikipedia.org/w/index.php?title=Chinese_room&oldid=161355774 SilverCity1 (talk) 21:35, 3 February 2023 (UTC)
Although the "Criticisms" section nailed it, it could have done with a bit of NPOV and/or a citation or two. As it is, it looks more suitable as a talk page comment, and even then it would look too much like discussion of the topic rather than discussion of the article. I'm not surprised it was removed. But it should have been improved, not removed completely. This is really weird. Is Searle some kind of sacred cow? If so, why? Polar Apposite (talk) 13:13, 5 February 2023 (UTC)
I think the issue comes from the use of "understand". Take the statement "Searle can't translate words from Chinese to English but the program can", as opposed to "Searle can understand that the poem is beautiful but a program can't". If the poem is in Chinese, then English-only speakers can't really do anything with it. But if you know Chinese, then you can understand what is being communicated in the poem and realize that it's beautiful. The thought exercise was initially proposed as a means of separating the syntax of words from the thought processes those words convey. Does that help? Padillah (talk) 15:06, 19 October 2017 (UTC)


Smart computer scientists will tell you the argument is irrelevant to artificial intelligence. This is not the same as saying that it has been refuted. Searle would agree that this argument doesn't try to put any limits on how intelligent or human-like machines will be in the future.
Contrary to the post above, philosophers take the argument very seriously, because it is an illustration of the hard problem of consciousness, which may be the single most important problem in philosophy today. See also Mary's room for another illustration.
Re "parts using parts": the question at issue is how could a program create consciousness? What is it about the "whole" that might make it different than the parts? There is no easy answer to this, and as long as there isn't, the argument remains unrefuted. ---- CharlesGillingham (talk) 05:50, 28 December 2017 (UTC)[reply]
Which philosophers take the argument very seriously? Polar Apposite (talk) 17:51, 16 September 2022 (UTC)
If the Room says it thinks the poem is beautiful and can explain why in an impressive way while passing the Turing test, then what does it even mean to say the Room doesn't understand the poem, including the beauty of the poem? Yes, inside the Room we can see that it's a man, who doesn't understand a word of Chinese, with a rule book, moving cards around; but if we look inside the head of a Chinese poetry lover, we see a bunch of molecules obeying the laws of physics. Polar Apposite (talk) 17:49, 16 September 2022 (UTC)
These are my thoughts on the matter. I think the main question is about consciousness, i.e. "is the system conscious?" Now, "consciousness" is not well defined and can only be understood in the intuitive sense. Is there a reason to think that a system composed of cards, a man dealing them, the rule book, etc. is consciously understanding Chinese, while the parts of the system don't? Even if you apply it directly to computers, we know computers are nothing more than high-voltage, low-voltage signaling machines; everything else one might say about computation is just interpretation. And this could be replaced by anything able to reliably represent two distinct values (I figure instead of electricity you could use another type of energy). Is there a reason to think this sequence of two distinct values might, as a system, become conscious?
The only thing we intuitively know to be conscious is us human beings, and for a variety of reasons we infer that this consciousness emerges from and depends on the brain. Or, you might say, the brain is conscious. Now, the brain is also a heap of cells, and we can agree this heap of cells somehow becomes conscious. Seeing as this is the only conscious entity we are really sure of, it would be safest to infer that another entity is conscious only if it contains the same causal mechanism that the brain has to create consciousness. So, it would be intellectually honest to say that a computer (however implemented) is conscious only if it fulfills the requirement stated above. The problem is, it seems, that no one knows what those causal mechanisms are. This question, I think, might only be answered by advances in the field of neuroscience, if it turns out that the mechanism which creates consciousness is computational, i.e. that the part of the brain which creates consciousness is a computer.
Until this happens I see no honest reason to assume that computers are conscious.
So you think that the safest inference is the only intellectually honest thing one can argue? --ExperiencedArticleFixer (talk) 09:00, 29 August 2019 (UTC)
We don't actually know that we are conscious from our intuitive, subjective impressions. We can define any talking, living human as conscious, provided he makes sense, but that's all. We believe we are. Or we are sure we are. We can't prove that we are, and so we cannot, as scientists, say that we know we are. Likewise, we could define consciousness as something only humans have, or even something only nonhumans have, but that is obviously arbitrary and sheds no light on the question. You are doing likewise when you 'infer' that only brains, or machines that work the same way as the human brain, are conscious. It is equally arbitrary and circular.
We haven't got a clear definition of consciousness, only a claim that something/someone makes about itself/himself that it/he is conscious.
If it acts conscious, then the burden is on you to say why it is not conscious, and not vice versa. You need to show why it is disqualified despite having jumped through all the hoops by passing the Turing test. You need some rule that you can use to supplement the Turing test. But if it's arbitrary, it proves nothing.
Let me repeat, my subjective impression of being conscious, or even of the sensation of the color red, is not proof that I really have that sensation. It is a belief, even if I believe it utterly. So is my belief that I can intuitively, directly experience my own consciousness, while other people's consciousness is something I have less reason to believe in, even if they say they are conscious just as eloquently as I do.
You are not using an *operational definition* of consciousness. Polar Apposite (talk) 18:18, 16 September 2022 (UTC)
My personal opinion is that computers (or computer programs) are not conscious, so Searle's argument holds. At the very least, there is no reason to assume that they are.
On a different note, I think the root cause of this problem is actually the subjectivity vs. objectivity problem, but that is beyond the scope of this discussion... Dannyfonk (talk) 09:18, 31 December 2017 (UTC)
It's more of an assertion than an argument at this point. If you and others hold the unfalsifiable opinion that computers can't be "conscious", that's fine, but the Chinese Room scenario is so poorly constructed and interpreted that it's baffling to me that an adult with a PhD was able to pass it off as academic work. One day the page intro will point this out, but for now we have to take this garbage seriously. Let me be clear: As an opinion, the idea is fine (albeit unoriginal) and I would entertain the idea as part of my own personal philosophy; as academic work published in a journal, it is garbage by modern standards.
I am a physicist by trade, so I 2001:480:91:FF00:0:0:0:15 (talk) 17:04, 31 December 2020 (UTC)
The quality of the argument isn't really all that relevant to how it should be treated. It is influential and notable. I'm not certain I have ever met anyone who had a high opinion of the argument or Searle, but I've certainly heard a lot of people talk about it a lot, and the body of published work discussing it speaks for itself.
The notability is really the only relevant factor, but as an aside I would add that there can be a lot of value in a simple, well-expressed presentation of a bad argument. It can make explicit a bad but intuitive idea that others are merely hand-waving at, and thus make it easier to refute. See, for example, What Mary Didn't Know. Ncsaint (talk) 12:34, 18 February 2023 (UTC)
The notability of an argument depends to a large extent on the quality of the argument, I would have thought. Polar Apposite (talk) 15:40, 19 February 2023 (UTC)
Hi, I'm a philosopher. It's not garbage; it's a good argument. Philosophy being a discipline that deals with the basics, basicness is sometimes mistaken for silliness. --ExperiencedArticleFixer (talk) 16:56, 16 January 2021 (UTC)

Spelling nitpicking: minuscule "r" for room? Or uppercase R?

I've noticed that "Chinese Room argument" has many representations in this article, spelling-wise: "Chinese Room", "Chinese room", "Chinese Room Argument", "Chinese room argument" and "Chinese Room argument". Now, some of these mentions are directly quoted from research papers and might be reproduced in their original spelling, whereas the article in general uses "Chinese room" or "Chinese room argument", with a minuscule "r" for room. Not wanting to stir things up, I left things as they are; however, I do wonder whether you think the usage is consistent as-is (I think maybe it is not), or whether anyone who has read all the relevant referenced articles can decide that it indeed is (not me, it's been too long since I read Searle). Any pointers? Or too much meta already? Vintagesound (talk) 20:08, 25 May 2021 (UTC)

I would support either convention. "Chinese Room Argument" is the name of the argument. The "Chinese room" is something that appears in the thought experiment. But I imagine that there isn't a super reliable source. What does the Stanford Encyclopedia of Philosophy do? ---- CharlesGillingham (talk) 08:07, 16 August 2021 (UTC)
Good question, but the SEP article is a mess. It starts off using 'Chinese Room Argument' but later slips into 'Chinese Room argument'; it mostly refers to the room as 'the room' but sometimes refers to it as 'the Chinese Room'.
So I think Wikipedia is on its own, and your first instinct seems best -- 'Chinese Room Argument', 'Chinese room', 'room'. Ncsaint (talk) 12:25, 18 February 2023 (UTC)

The Game (1961) is a philosophical argument

You deleted my text and wrote: "The Game is not an argument in the philosophy of mind, and this article is about Searle's argument, not one element of his thought experiment". This is incorrect, because The Game (1961) is "an argument in the philosophy of mind". Possibly you are not familiar with this style of writing philosophical arguments: it is called Socratic dialogue, originates from Greece, and is popular in Eastern Europe, including Bulgaria and Russia. I am familiar with these facts because Bulgaria is a neighboring country of Greece. Thus, The Game (1961) is the original argument, in Russian, that contains both the Chinese room and the China brain arguments. Obviously, the American philosophers have plagiarized The Game, as they have not cited it. I do not know either of the two philosophers who are credited separately for the Chinese room (1980) and the China brain (1978) to have stated publicly that they can read and understand Russian. During the Cold War between the Soviet Union and the USA, ideas were stolen by both sides without referencing the other. This ended in the 1990s with the end of the Soviet Union. In the course of 20 years (from the 1960s to the 1980s) after the publication of Dneprov's work, his story The Game was split into two arguments, although in the original it was combined into a single experiment. I would advise you to go and read the whole story before making further edits. In case you do not understand Russian, here is the full trilingual version (Russian-English-Bulgarian), prepared by me (I understand Russian perfectly and have verified every word in all 3 languages), of Dneprov A. The game. Knowledge—Power 1961; №5: 39-41. PDF. A direct quote from the article shows that it is a philosophical argument: "I think our game gave us the right answer to the question `Can machines think?' We've proven that even the most perfect simulation of machine thinking is not the thinking process itself." The original scan of the Russian journal is added as an appendix to my translation. The Game was published in the May 1961 issue, just when the Soviet Union beat the USA in sending the first human into outer space; that is why Yuri Gagarin is on the journal cover. p.s. Please include back the material that you deleted. I would not object if you edit it to suit your own vision of neutrality. Danko Georgiev (talk) 17:46, 19 August 2021 (UTC)

Nevertheless, this is an article about Searle's argument, not about all arguments of this form. ---- CharlesGillingham (talk) 21:07, 21 August 2021 (UTC)
I added Dneprov's name to the first paragraph (along with the other precursors mentioned in the history section). Is this acceptable? You may have noticed that, in my last edit, I made sure the article no longer gave any precedence to Searle for this kind of critique. It just says that he is the author of this specific critique.
But I still don't agree that your edits were a good idea. My previous answer was a bit glib, so I thought I would flesh it out.
My goal here is strictly editorial. Every article has a topic, and it's an editor's job to keep all the material in the article on topic. The title of this article is "Chinese Room", which is a specific thought experiment, not a general approach to these problems. "The Game" and the "Chinese Room" are two different critiques of AI. This article is about only one of them.
Everything in this article is specific to the "Chinese Room" version of this idea. Nothing in this article is about "The Game" or other similar arguments or thought experiments (except the two paragraphs in "History"). The Replies are all from people who replied to Searle's argument, not "The Game". The clarifications in the Philosophy and Computer Science sections were made by people trying to clarify Searle's version of the idea, not "The Game". Searle's Complete Argument is made by Searle, not by Dneprov. There is hardly a sentence in this article that isn't reporting some bit of academic dialog about Searle's version. None of this academic dialog was about Dneprov's version.
Perhaps American academics should have paid more attention to Dneprov's version. Like you, I have no idea if any of the participants in this 40-year-long argument even knew the Dneprov version existed. Perhaps Dneprov's version deserved a bigger footprint in the philosophy of mind, or the history of philosophy. Perhaps one day it will receive that kind of attention. All that might be true. But:
We are merely editors. We report, we do not correct. We can't rewrite academic history from Wikipedia. This article summarizes the 10,000 pages of academic argument that have been published in reaction to Searle's version. That's what the article is. Thanks to you, it now mentions Dneprov as one of the people who had the same idea, quite a bit earlier. But Dneprov's version is not the topic of the article. ---- CharlesGillingham (talk) 21:41, 21 August 2021 (UTC)
Finally, would you object to moving this conversation to the talk page of Chinese Room? I think it belongs there. ---- CharlesGillingham (talk) 21:41, 21 August 2021 (UTC)
Dear Charles, thank you very much for the explanations of your thoughts on the editing process. I will not mind if you move the conversation to another talk page if you want. Because I work daily on scientific projects, my thought process is focused mainly on the veracity of ideas or arguments. Consequently, I can give you reasons for or against some proposition, but I will leave it up to you to decide what to do. In regard to your general attitude of separating Dneprov's argument and Searle's argument, it is factually incorrect because the Chinese language is not essential to characterizing Searle's argument! Consider this: the Chinese room makes no sense to 1.5 billion Chinese citizens who are native speakers of Chinese! All these Chinese people will perform the manipulations and they will understand Chinese, thereby invalidating Searle's conclusion! Only from this global viewpoint, namely that Searle's argument is meaningless to 1.5 billion native Chinese speakers (i.e. 1/3 of all people on Earth!), can you understand the importance of my original edit in Wikipedia where I stated that Searle "proposed Americanized version" of the argument. "Americanized version" means that it makes sense to Americans, but may not make sense to people living in other parts of the world. In particular, if Chinese philosophers want to teach Searle's Chinese room in their philosophy textbooks, the only way to achieve the goal intended by the author is to change the language to some language that they do not understand, like the "American room". Now, after I have demonstrated to you that the word "Chinese" in the "Chinese room" does not define the argument, it is clear that Dneprov's argument is exactly the same argument, written for Russians and using the Portuguese language, which Russians do not understand. What matters from a historical viewpoint, and in terms of attribution, is that it precedes Searle's publication by ~20 years. A philosophical argument is defined by the general idea of the proof and the final conclusion. Dneprov uses (1) people to simulate the working of an existing 1961 Soviet computing machine named "Ural", which translates a sentence from some language A to another language B. (2) The people do not understand language A, neither before nor after they perform the translation algorithm. (3) Therefore, executing the translation algorithm does not provide understanding. The final conclusion intended by Dneprov, using the technique of Socratic dialogue (i.e. the words are spoken by the main story character, "Prof. Zarubin"), is that machines cannot think. Searle's argument is identical, except for the number of people involved in the translation (one or many, it does not matter) and the specific choices of languages A and B. My general attitude in contributing to Wikipedia is that the English version is for all mankind, and not only for Americans. Therefore, articles should be written from a country-neutral perspective and with sensitivity to the fact that most Wikipedia users are non-native English speakers across the globe. p.s. If you find a version of Searle's argument written by someone before 1961, then definitely send me a copy of the original text and I will advocate for attribution of the "Chinese room" to the first person who clearly formulated steps (1), (2) and (3) of the argument. Danko Georgiev (talk) 05:57, 22 August 2021 (UTC)
I still think you're misunderstanding our role here.
This is an encyclopedia article about a historically notable conversation in philosophy. The historical conversation has already happened. We can only report what the conversation was. We can't report what the conversation should have been.
We don't write what we, ourselves, think is true. We report what notable philosophers said. We don't try to put words into their mouth, or rebuild the subject for them. We try to explain what they said, who they said it to, and what the context was, without putting our own spin on it.
So it actually doesn't matter if you're right about this -- if Dneprov was making exactly the same argument. This article can't give the impression that the entire field of philosophy says the same things you do, even if you're right. Again, none of the other people cited in this article think they were writing about Dneprov. Harnad, Dennett, Dreyfus, Chalmers, McGinn, all of them --- they weren't writing about Dneprov. They might not even have heard of Dneprov. They were writing about Searle.
That's what we have to report -- what they thought. We can't take into account what we think. What we think has no place in Wikipedia. So you don't need to keep arguing for this -- no matter how good your argument is, it doesn't affect the 'editorial' issue. ---- CharlesGillingham (talk) 06:49, 23 August 2021 (UTC)
Dear Charles, I think you have a problem with understanding the meaning of the jargon (technical words) used in philosophy, which means that you should probably refrain from editing articles on philosophical subjects. The sense in which the term "argument" is used in philosophy is the same as "proof", not as "conversation". The intended meaning (semantics) of the expression "Chinese room argument" is the same as "Chinese room proof" and NOT as "Chinese room conversation"! P.S. I have just seen the new revision of the article and do agree with it. Danko Georgiev (talk) 15:45, 23 August 2021 (UTC)
To other wiki editors: The main issue that I had with the article was the removal of the text on the first documented discoverer of the Chinese room argument (proof/theorem), Anatoly Dneprov. I do not object that Searle came up (possibly independently) with the same proof, and that he is widely mis-credited as the originator of the argument. However, we now have a documentary record of Dneprov's 1961 work and a high-quality scan of the Soviet journal. So, this historic fact should be mentioned in the wiki article. For the type of wiki edit that I am proposing, I give a concrete example with the Pythagorean theorem, which bears the name of Pythagoras. For a long time, it was taught in Europe that Pythagoras discovered the general theorem, while the Babylonians only knew of some special cases called Pythagorean triples (like 3, 4, 5). However, we now have historic evidence that Indians in the 8th to 6th centuries BCE already knew the Pythagorean theorem, as it is recorded in the Baudhāyana Sulbasūtra. This information on the earliest proof is included in the wiki article on the Pythagorean theorem. The Chinese room argument is a proof or theorem providing a negative answer to the question whether machines can think. Dneprov's proof was published in 1961, which is almost two decades earlier than Searle's proof in 1980. Therefore, Dneprov's work has to be mentioned in the article with respect to historic attribution. Dneprov was a Soviet physicist working on cybernetics. His publication was intended to prove to anyone interested in science that machines cannot think. Danko Georgiev (talk) 16:19, 23 August 2021 (UTC)

Just to put this to rest -- Dneprov is now credited (in the history section) with having come up with the exact same argument 20 years earlier, but the article as a whole is still about Searle's version and the replies to it. ---- CharlesGillingham (talk) 09:27, 11 September 2021 (UTC)

Brain replacement scenario

I don't particularly see how this (or any other reply involving an artificial brain) actually is a reply to the Chinese Room, a thought experiment involving a set of instructions and a processor. More specifically though, the quote from Searle in this section is certainly wildly out of context; it's framed here as Searle's prediction of why brain replacement could not result in a machine consciousness, but in Rediscovery of the Mind, the source of the quote, Searle brings up brain replacement to illustrate the difficulty in inferring subjective mental states from observable behaviour, and this description of brain replacement resulting in gradually shrinking consciousness is alongside two alternative scenarios in which consciousness is entirely preserved (one in which behaviour is also preserved, and one in which it is not). It's not a prediction, or a reply to a reply to the Chinese Room argument — except perhaps thematically. LetsEditConstructively (talk) 14:14, 8 November 2021 (UTC)

I think we should cut the Searle quote. Or we could remove the word "predicts" and give a more accurate context of the quote, if you're up for it.
However, I think we should leave the scenario in, as one more item in the list, even if Searle himself wouldn't call it a "reply". Three reasons:
  • Russell and Norvig (the leading AI textbook) brings it up while discussing the Chinese Room, so we are justified in bringing it up here.
  • The scenario is on topic. You said that Searle brings it up while discussing "The difficulty in inferring subjective mental states from observable behaviour". This is a key issue in the Chinese Room argument. (Some would say it's the only issue. Colin McGinn, for one.)
  • The scenario is a legitimate problem for the CRA, at least as much as the other replies in this section. We know that: (1) Brains have subjective mental states. (2) A brain replaced by tiny, parallel digital machinery is equivalent to a digital computer running a program. (3) Searle thinks that the Chinese Room shows that it is impossible for a digital computer running a program to have subjective mental states. So, if Searle is right, then (1+2+3) consciousness must be present at the start of the procedure and absent at the end. It's hard to say exactly how and when it disappears, and it's easy to imagine that it just doesn't disappear and that Searle is wrong. It's an argument from intuition, like most of the other replies.
That's my two cents. Let me know if you agree. ---- CharlesTGillingham (talk) 07:47, 12 July 2022 (UTC)
I have incorporated your thoughts into the article. ---- CharlesTGillingham (talk) 01:41, 2 August 2022 (UTC)
I probably should have left the first sentence off my initial post, because my main concern was how the quote from Searle was incorporated into the section. I think the new text/footnotes are much better, thank you!
It isn't relevant to the article but I have to ask: do we know (2)? They might be computationally equivalent, in that the same operations can be performed on either one (or on a very large piece of paper, for that matter), but they certainly aren't physically equivalent. Treating them as the same thing seems like begging the CRA's question. LetsEditConstructively (talk) 12:51, 1 February 2023 (UTC)
I suppose I mean that they aren't equivalent structurally, rather than physically. An artificial brain's ability to converse in Chinese is presumably intrinsic to the arrangement of its neurons, rather than depending on the program its neurons execute, while a program that converses in Chinese (even one that functions by simulating neurons) should be able to do so regardless of the specifics of the hardware it's run on. Anyway, this is getting fairly forumish, so I'll drop it. LetsEditConstructively (talk) 23:00, 1 February 2023 (UTC)

Paragraph in the wrong place?

This paragraph: "Searle argues that his critics are also relying on intuitions, however his opponents' intuitions have no empirical basis. He writes that, in order to consider the "system reply" as remotely plausible, a person must be "under the grip of an ideology". The system reply only makes sense (to Searle) if one assumes that any "system" can have consciousness, just by virtue of being a system with the right behavior and functional parts. This assumption, he argues, is not tenable given our experience of consciousness." seems to be out of place in the Churchland Luminous Room argument section. Does it belong elsewhere, or does it need to be edited to make it refer specifically to Churchland? Polar Apposite (talk) 08:45, 14 September 2022 (UTC)

Churchland's luminous room is not supposed to be a section header; it's just the boldface name of the argument given in that one paragraph, which is a weird layout used throughout the article.
The final two paragraphs both reply to all of the arguments in this section (that is, arguments that say Searle's argument depends only on people's intuitions).
We should probably remove all those "boldface argument name" things and just introduce the name of the argument in the first sentence of each paragraph. ---- CharlesTGillingham (talk) 00:24, 16 September 2022 (UTC)
I went ahead and removed the "boldface argument name / indented description" form in a few sections, but looking at it again, I'm not sure if my edits are an improvement. Does anyone have an opinion?
The structure of each section is this:
  • introduction to this kind of reply
  • three or four replies of this form
  • summary with Searle's counterargument against any reply of this form.
Right now, some of the sections use the old format (where each argument gets its own header and is indented, so that the un-indent indicates that we're now talking about all these arguments) and some of them are just flat. Which is better? ---- CharlesTGillingham (talk) 01:11, 16 September 2022 (UTC)
Okay, I have given all the replies a consistent format. It might not be the best, but at least it highlights the structure of the dialog. --- CharlesTGillingham (talk) 17:06, 10 July 2023 (UTC)