Talk:Statistical inference

From Wikipedia, the free encyclopedia

Merge?

Please merge the pages 'statistical inference' and 'Inferential statistics', but be aware that the page on statistical inference has more concise information than the other page.

Thanks for making this article. I've been wanting it for a long time. It has a bearing on environmental issues, because the case for a global warming catastrophe or ozone depletion is entirely based on statistics. User:Ed Poor

(William M. Connolley 21:49, 6 Nov 2003 (UTC)) Actually that's not true. In the case of ozone, the obs and theory are sufficiently good that statistics aren't needed. GW has theory, modelling and obs too, but stats are important as well.

Thanks for the information on this page. I'm not in favor of merging this page with Inferential statistics or statistical induction. I can see how the statistical inference page complements the inferential statistics page, but for myself... I needed the information on this page (statistical inference) alone and do not at this time need the depth of information on the other page (inferential statistics). [User: C. Lawson]

Why is that an objection to the proposed merger? No one's proposing getting rid of this information---just incorporating some of it into that other article, and maybe putting some of it somewhere else. Michael Hardy 22:55, 4 October 2006 (UTC)

Types/Schools

There are actually three schools of statistical inference: Bayesian, Fisherian, and frequentist.

Source: Essentials of Statistical Inference (Cambridge Series in Statistical and Probabilistic Mathematics) by G. A. Young (Author), R. L. Smith (Author) —Preceding unsigned comment added by 80.171.193.219 (talk) 20:26, 31 May 2008 (UTC)

These authors state a falsehood. Look at JASA 2000 for e.g. information-theoretic statistics, which has featured contributions by Kolmogorov, Turing, etc., and explain into which of these pigeon-holes information-complexity statistics falls, please!
What is Fisherian statistics? Is that "fiducial statistics", which collapses outside of exponential families (Lindley, JRSS around 1955 or so), or the subjective probability-model "theory" which was ridiculed by Frank Ramsey and Rudolf Carnap? Kiefer.Wolfowitz (talk) 17:25, 25 February 2010 (UTC)

needs revision

The distinction between Bayes/frequentist and objective/subjective seems largely unhelpful; see e.g. Christian Robert's book for a strongly argued statement that there's no such thing as objective analysis - ultimately, someone has to choose what to do.

It might be more useful to discuss the distinction between non-, semi- and fully parametric inference, which is orthogonal to the Bayes/frequentist choice. —Preceding unsigned comment added by McPastry (talkcontribs) 21:06, 28 February 2010 (UTC)

Dear Brother McPastry, I am delighted to find another statistician who wants to reduce the schisms among Bayesian, not-necessarily-Bayesian, and anti-Bayesian statistics, as presented on WP.
However, the "anti-objectivity" statement of Robert cannot be viewed as mainstream statistics. It contradicts the ISI Code of Ethics and the curricular guidelines of the ASA, at least, and the statements of leading textbooks. Kiefer.Wolfowitz (talk) 21:21, 28 February 2010 (UTC)
McPastry first removed the recommendations favoring objective randomization (in the leading textbooks and the professional guidelines of statisticians, in the USA and worldwide) because of his objection to "objectivity", etc. Now McPastry has again removed those references, without responding to the points---especially regarding the need to cite reliable sources.
McPastry, please seek consensus on the talk page before doing such unilateral editing.
I hadn't noticed this before; it must obviously be a joke. I say "joke" because of the unilateral way that KW destroyed the pre-existing article without seeking consensus. Melcombe (talk) 09:36, 5 March 2010 (UTC)
This is pretty hilarious, Melcombe, given your destruction of the "fiducial inference" article, which was written after reading about and apparently being able to verbalize some of the fallacies of fiducial probability and inference. You inserted the same nonsense in this article and then defended it, despite (in some sense) knowing better. Again, your tolerance for cognitive dissonance exceeds mine. Kiefer.Wolfowitz (talk) 09:57, 5 March 2010 (UTC)
But it wasn't me who destroyed an article without consulting on the talk page and then demanded that everyone else should do so. I see that your inability to cite without requiring telepathy of the reader and future editors has now infected "fiducial probability" as well as the other articles you have touched. Melcombe (talk) 12:38, 5 March 2010 (UTC)
IMHO, this editing is improper unless I have misrepresented the ASA and ISI guidelines or the textbooks. If you want, you can cite Robert or Lindley that "objectivity" is misleading, etc., but please label that view as contrary to the professional guidelines and the mainstream literature.
I would consider it a courtesy if you would restore the mainstream recommendations in the introduction. Thanks, Kiefer.Wolfowitz (talk) 21:42, 28 February 2010 (UTC)
I ain't no 'Brother', brother. 'Objective' is unhelpful to the lay reader; it suggests that anything else is unfair, or somehow tainted by vested interests. As someone, somewhere, has to choose what analysis to do - subjectively - the use of the term is unhelpful and confusing. I have reformatted some of the rest of the text to incorporate different types of models and different modes of inference - all combinations of which are possible. McPastry (talk) 01:21, 1 March 2010 (UTC)

Randomized data versus playing with data sets

A randomization test uses the randomization that was described in the experimental protocol and actually implemented; the randomization is not merely assumed. Similarly, in sampling, the random sampling scheme defines a distribution for the sample statistic.

In contrast, in nonparametric statistics, a permutation test plays with the data regardless of its origins. If a supposed distribution is nice but otherwise nonparametric, then permutation tests characterize best tests (Lehmann's TSH; cf. Paul Rosenbaum's book on observational studies). Unfortunately, observational data are not i.i.d. but d.d.d.---dependent and differently distributed (Freedman, Statistical Models)---so the relevance of permutation tests to observational data depends on implausible assumptions.
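The two-sample permutation test under discussion can be sketched in a few lines (a hypothetical illustration, not code from any of the cited sources; the function name and data are invented). When the group labels were in fact assigned by physical randomization, this same computation is a randomization test; applied to observational data, it leans on exactly the exchangeability assumptions questioned above.

```python
import random

def permutation_test(group_a, group_b, n_permutations=10_000, seed=0):
    """Two-sided permutation test on the difference of group means.

    Repeatedly re-labels the pooled observations at random and counts
    how often the re-labeled difference is at least as extreme as the
    observed one. Under physical randomization this is a randomization
    test; otherwise it assumes the observations are exchangeable.
    """
    rng = random.Random(seed)
    mean = lambda xs: sum(xs) / len(xs)
    observed = mean(group_a) - mean(group_b)
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    extreme = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # one random re-labeling of the pooled data
        diff = mean(pooled[:n_a]) - mean(pooled[n_a:])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_permutations
```

For well-separated groups the estimated p-value is small; for eight observations it approximates the exact value 2/70 that enumeration of all label assignments would give.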

McPastry repeatedly has removed the description of randomization procedures for randomized data (data from randomized experiments or random samples) and substituted permutation tests for "data sets". I have asked McPastry repeatedly to stop such editing. Kiefer.Wolfowitz (talk) 09:14, 2 March 2010 (UTC)

Neutral tone versus pejorative language

I have repeatedly edited KW's comments, which imply that randomization is the only defensible approach to inference - see comment on "playing with data sets" above. There are several approaches to inference, all of which merit impartial and neutral-toned discussion. McPastry (talk) 19:50, 2 March 2010 (UTC)

Agreed. Basic considerations are at WP:NPOV. Melcombe (talk) 10:05, 3 March 2010 (UTC)
No statement written by me implies that "randomization is the only defensible approach". Please quote one offending text, McPastry or Melcombe (or both)!
On the contrary, I clarified the "nonparametric" use of permutation tests, citing Lehmann (and Rosenbaum), so McPastry's claim is obviously false.
Rather than claim that randomization is the only defensible approach, I have consistently written that
  • (1) randomized procedures are preferred, citing reputable sources (ASA guidelines, Moore & McCabe, etc.).
  • (2) permutation tests are most relevant when the data come from randomized procedures (in which case, they are termed "randomization tests"), citing Lehmann. The relevance of permutation tests to non-randomized data is questionable (Hinkelmann & Kempthorne; Basu's paper on the Fisher randomization test could be cited, along with the discussion).
McPastry has failed to cite one reliable source opposing propositions (1) and (2).
There are several approaches to arithmetic, including statements like 1+1=3. However, it is hard to find reliable sources supporting 1+1=3, and easy to find reliable sources supporting alternatives, especially the alternative that 1+1=2.
Regarding "pejorative" phrasing:
  • The scare-quotes on "statistician" were in the cited text. See Hinkelmann and Kempthorne, Exercise 6.3 (page 193 in first edition).
  • "Playing with data" occurs in Basu's paper on the Fisher randomization test.
  • It may be useful to consult a dictionary on "foolhardy" (which is gentler than John Tukey's "damned fool", a reliable source whose exact citation I couldn't find via Google).
Kiefer.Wolfowitz (talk) 11:53, 3 March 2010 (UTC)

Progress

I thank the other editors for recent improvements, which try to preserve the truth of others' (particularly my) statements (while trying to correct errors). Kiefer.Wolfowitz (talk) 14:50, 3 March 2010 (UTC)

more edits

KW continues to insert warnings into neutral-toned descriptions of what assumptions are. Validity of assumptions is a good topic, but it doesn't help the reader (who is reading e.g. a description of three levels of assumption) to be distracted at the first bullet point with concerns over SRS-veracity. So I moved them.

Much of the rest of this article (e.g. all the randomization stuff) heavily accentuates discussion/criticism of methods, rather than elucidation of what the methods actually are, and how they are justified. McPastry (talk) 00:44, 4 March 2010 (UTC)

McPastry and Melcombe have again asserted that statisticians can assume that data arose from simple random sampling --- where is the quote from David Cox (whom McPastry cites for authority) that allows an iid assumption to be made (without subject-matter knowledge or randomization) for inductive inference (as opposed to abductive hypothesis generation)? McPastry and Melcombe have altered the text so that it read that the population's normality is a consequence of the central limit theorem (rather than the normality of the sample mean). This is incompetent statistics, which violates the ISI code of ethics, especially since one or both of you label yourselves statisticians.
McPastry has deleted referenced statements to the contrary, made by leading statisticians, with precise citations, and has failed to strive for consensus. This editing violates Wikipedia policy. Kiefer.Wolfowitz (talk) 14:42, 4 March 2010 (UTC)


It would be good if KW would do a number of things: (1) learn what Wikipedia is supposed to be ... it is specifically not a "statement of best practice" and not a place to impose particular viewpoints, as was done in the deletion of (nearly) all mention of fiducial inference simply because KW doesn't like its being mentioned ... start from WP:NPOV as previously suggested; (2) remember what this article is about ... its title and subject is "Statistical inference" ... there are separate articles on statistics, statistical models and statistical assumptions where things that are not directly relevant to "statistical inference" can be placed, if they are worth saying at all; (3) find out how and when to include citations, in particular as accessible sources that can be sought out to confirm that someone else has made the point, or other confirmation of formulae etc. ... the citation need not be to the first person who made the point; (4) resist the urge to include references by Peirce in every article he edits ... it can't always be necessary to include 5 or 6 such references.
So now KW, tell us exactly where you think Cox is being cited as a justification for advice to take any particular approach. Melcombe (talk) 16:39, 4 March 2010 (UTC)
Melcombe raises many points. (1) In particular, Melcombe exaggerates my editing about fiducial inference. Indeed, I provided the Lindley reference (whose citation he improved greatly) and described the limitations of fiducial inference. I also provided all the references to serious statistical work that had been inspired by Fisher's writings on "fiducial inference".
So let's discuss these references to Peirce. The text says "Objective randomization allows properly inductive procedures", and then lists 6 publications by Peirce to back up this statement. Is KW saying that all 6 of these are needed for this purpose? And that the same 6 are needed in all the other articles where he has inserted this set of publications? Melcombe (talk) 12:46, 5 March 2010 (UTC)
It's a waste of space to discuss the original, since Fisher's "theory" collapses --- an hour's reading should establish that this statement is consensus. It's unfortunate that Melcombe still has failed to correct the damage he inflicted on the Fiducial inference article's lead sentence, even though he's apparently read about the failings of Fisher's fiducial system. (I have less tolerance for cognitive dissonance.)
(2) Elementary logic suggests the problems with false hypotheses: In many systems, one can prove anything via a false hypothesis. Unless this article presents itself as an article in pure mathematics, then some discussion of premises is important, especially since leading authorities suggest that the use of incredible assumptions distinguishes statistical incompetence.
(3-4) Statisticians Rao, Stigler and Kempthorne and logicians von Wright and Hintikka and philosopher Hacking have argued that Peirce is a central figure in statistics, especially in statistical inference. Unlike Fisher, these sources can use "sufficient" and "necessary" correctly in the fourth editions of their books! Unless Melcombe cites a reliable source contradicting these (reliable) sources, his personal feelings about citing Peirce are irrelevant. Kiefer.Wolfowitz (talk) 17:10, 4 March 2010 (UTC)
So you have forgotten your edit summary 'remove "fidual inference" with belongs with "flat earth" etc. theories'. It is obvious that any encyclopedia article on statistical inference must say at least a modest amount about it. The fact that there is consensus that it can't be made to work can be mentioned, but is itself irrelevant to the question of whether it should appear. Recall what Wikipedia is. Melcombe (talk) 17:39, 4 March 2010 (UTC)
Melcombe's understanding of English puzzles me---especially Melcombe's misuse of "always" and "every" and "must" and "necessary" --- the last word gave Fisher trouble, also, as I noted before (in a remark I gently hid before Melcombe's latest contribution, but which is now shown).
Like the flat-earth theory, "fiducial inference" made claims that were false. There are many such failed systems of "inference", which are not mentioned. Where is the discussion of "fiducial inference" (or "flat-earth theories") in JASA 2000? That review of statistics had discussions on information complexity, which Melcombe neglected to discuss while he inserted "fiducial inference" in yet another article. Kiefer.Wolfowitz (talk) 17:47, 4 March 2010 (UTC)

The new intro is a nice improvement. I agree with Melcombe about scope (and much else). Some issues I think could be cleared up:

  • Saying non-parametric means "very few" assumptions seems too vague; one could say that assuming a classical linear model is just one assumption. I appreciate that it looks very abstract, but the finite-dimensional restriction is in van der Vaart and in Cox [Principles book] - and probably lots of other places too. Perhaps a little more in the way of examples would help the reader?
  • Simple random sampling is an assumption that can be made in fully- semi- and non-parametric models. So it's perhaps not best to state it as the sole example in the non-parametric bullet.
  • The randomization-based models section should start with something to orient the reader. For example, "An important experimental situation where strong assumptions are well-motivated is the randomized study". Then say how inference proceeds. Then compare randomization-based inference to other approaches.
  • "warrants" is nowhere near the right word.
  • False models and pi=22/7 is a bizarre comparison. Anyone can see 22/7 is not Pi. It would be an overstatement to ridicule the assumption of classical linear models because they're not true. (n.b. I would rather not include the Box mantra on models and usefulness, it's hackneyed and quoted as support by people with very different stances)
  • Information and computational complexity - doesn't mention inference anywhere, so not obviously relevant
  • Fiducial/Lindley - Lindley (1958) didn't show fiducial inference "fails"; he showed that it didn't correspond to a Bayesian procedure - this isn't quite the same. Don Fraser has written about this (well). McPastry (talk) 21:35, 4 March 2010 (UTC)
I can respond to only the most important comments, which I have numbered for convenient reference. Your tone & comments are constructive.
I agree with comments 1-4: However, regarding (1), it is a bad idea to introduce nonparametrics with "infinite dimensions" in Wikipedia, because you will scare most of the audience.
"very few" doesn't capture the right idea. Copying whatever Cox uses seems pragmatic. One could also re-order the non- and fully- sections. McPastry (talk) 01:06, 5 March 2010 (UTC)[reply]
Using 3 and 22/7 seems a good example, showing that a falsehood can be a good approximation and useful for making decisions, although it can be improved with the scientific method, leading to the truth as the limit of so-reasoned belief: this example is simpler than Peirce's example of iterative methods (e.g. Newton's). ;) It's certainly doomed as original research, since I don't know of a reference. This example was another instance of my striving for a neutral point of view, rather than just signalling skepticism about model-based inference, if I may say so.
The discussion in the article says nothing about "good approximation for making decisions"; as currently written it is not a useful analogy. Also, the situation in modeling is complicated by e.g. classical linear models giving valid-if-conservative tests for population parameters in many real-world circumstances. Using pi=3 or 22/7 doesn't capture this; there isn't a population-parameter interpretation of the area of a circle. McPastry (talk) 01:06, 5 March 2010 (UTC)
Richard Berk's book is a good accessible reference for the bogosity of models. It does, however, largely assume that one really wants to know about a true underlying finite-dimensional parametric model. McPastry (talk) 01:06, 5 March 2010 (UTC)
Regarding (6): Rissanen certainly does discuss "inference" in his little green book. I don't have time to add "inference" now or cite the pages in the articles referenced. I'll try to do it Friday or this weekend. In any event, information-theoretic and Kolmogorov-complexity approaches have shaken up even English "statistical inference", whose obsessions with Neyman and Fisher have long been recognized as unhealthy, e.g. by Cox.
Look forward to some inferential content here, then. McPastry (talk) 01:06, 5 March 2010 (UTC)
Regarding Lindley (7): tonight I cited Lindley's interview in Statistical Science about the non-additivity of so-called "fiducial probability". Previously, I did not specify the exact article in JRSS, but described the exponential-family result and suggested a year: I believe that Melcombe specified the currently cited article --- maybe he chose the wrong one, because I don't recognize that title: I believe that Cox's Principles cites the appropriate Lindley paper. Again, I'll try to check in the next few days. Thanks! Kiefer.Wolfowitz (talk) 21:56, 4 March 2010 (UTC)
If you didn't previously specify a title, perhaps saying that Melcombe "chose the wrong one" lacks diplomacy? McPastry (talk) 01:06, 5 March 2010 (UTC)
I don't like making mistakes, and I checked the reference, which was indeed the article I intended. You are correct that a Bayesian posterior is required, which would exclude some generalized Bayesian procedures needed to characterize admissible procedures, but otherwise includes much of statistics. Kiefer.Wolfowitz (talk) 22:10, 4 March 2010 (UTC)
I hid the Lindley JRSS reference in this article, which wasn't so clearly exciting after all---thanks for the discussion. (I never introduced the Lindley JRSS reference in the "fiducial inference" article.) However, I leave the Lindley JRSS article, which may be of interest to Bayesian or fiducial fellow-travelers, because it was cited by Cox's recent Principles, which is notable and reliable enough for me (on this point!). Thanks! Kiefer.Wolfowitz (talk) 22:21, 4 March 2010 (UTC)
McPastry, I speculated whether Melcombe had "maybe" chosen the wrong article. Wikipedia policy on Talk pages requires that substantial comments be retained, even when an editor (like myself here) would be tempted to hide speculation. WP policy also frowns on inserting comments out of time order. Kiefer.Wolfowitz (talk) 08:16, 5 March 2010 (UTC)
Well, it would certainly help if you could be bothered to supply a proper reference when you are inserting a citation ... after all, Wikipedia policy (which you are so keen on making up as you go along) is that you should be looking at the reference at the time you are editing, so it should be easy. Melcombe (talk) 12:51, 5 March 2010 (UTC)

I did look at Wikipedia articles on scientific theories, often by scientists with path-breaking work in other areas, that are widely dismissed by other scientists. The term for theories like fiducial probability is fringe science. I apologize for bringing up the example of flat-earth theories, which seems to have provided less stimulation to science than has fiducial inference, whose influences on Barnard etc. were noted on Wikipedia in this article. Kiefer.Wolfowitz (talk) 23:40, 4 March 2010 (UTC)

"Said to be fallacious"[edit]

Melcombe, can you find a reliable source that states that Fisher's argument in 1955 wasn't fallacious? The fallacy was dissected by Neyman in JRSS 1956. Does the article on ontogeny recapitulates phylogeny state that "this theory is sometimes said to be false"?! Kiefer.Wolfowitz (talk) 15:40, 12 March 2010 (UTC)

Oh? Are you now saying that it is not said to be fallacious? In the end it is just one person's opinion, and readers can make up their own minds. Even Zabell said that early versions of the Fisher approach were indistinguishable from Neyman's, so it can't be that all versions of a fiducial approach are fallacious; otherwise Neyman's would be as well. Melcombe (talk) 16:20, 12 March 2010 (UTC)
Fisher's last descriptions of fiducial inference involved conceptual confusions and obvious fallacies, in fact, not just in Neyman's reading.
Both Neyman and Fisher agreed that their conceptions were different, and both viewed the other's as wrong-headed. Your edits now suggest substantial agreement (for early versions of Fisher's "theory", which even he discarded!). Kiefer.Wolfowitz (talk) 16:35, 12 March 2010 (UTC)

areas for improvement

Some things I intend to work on (and would appreciate comments on):

  • Randomization-based models: this currently doesn't start with a description of what randomization-based inference is. Commentary along the lines of "a good observational study may be better..." seems unnecessary to me
  • Randomization-based models: discussion of whether or not there is a true model seems of marginal interest (there's not, yet the world doesn't end because of it). Pi and 22/7 seems an unhelpful analogy. Also, more limitations of randomized trials might be acknowledged.
  • Information and computational complexity: looks okay, but the comparison to e.g. Bayes is redundant - it's mentioned elsewhere in the article, as are its strengths and weaknesses
  • Fiducial Inference: this never says what fiducial inference actually is. I don't see why a reader would care that Fisher couldn't settle on a definition of something that hasn't been even vaguely described in this article.

Hope you are similarly minded. I intend to continue to remove pejorative language, which is inappropriate in an encyclopedia entry. McPastry (talk) 07:12, 16 March 2010 (UTC)

Many or most of your recent edits are improvements imho. However, ...
A. You did remove the statement that limiting results are irrelevant to finite samples, which is nearly a quotation from Kolmogorov and from Le Cam (many times in his book); Pfanzagl's recent textbook says the same thing. As the world's leading authority on probability and one of the world's leading logicians, Kolmogorov was a better authority on probability and logic than "looking at a recent statistical journal"; as the architects of approximation-theoretic and limiting statistics, Le Cam and Pfanzagl are better authorities on asymptotics than "looking at a recent statistical journal". You used to cite van der Vaart's textbook, which acknowledges the (obvious) priority and authority of Le Cam and Pfanzagl. (I do not object to your removing the "n < infinity" inequality, etc.)
It may be nearly a quotation, but it didn't read like that. It read like a bald claim that asymptotic approximations are worthless unless n = infinity, which sounds very like "letting the perfect get in the way of the practical" (to quote Efron) and is unhelpful. Rather than 'nearly quote' fairly advanced texts, it would be better to explain what the authors mean, in the simplest terms possible... all without belittling other approaches, of course. McPastry (talk) 06:17, 17 March 2010 (UTC)
On the distinction between limiting results (okay) and approximation results (good).
  • A limiting theorem has the form "for every epsilon, there exists some n such that good things happen"; in statistics, such results often are proved via arguments like "assume that there exists a sequence of consistent score estimators and suppose the conclusion fails --- then a contradiction occurs". Such arguments are wildly nonconstructive and irrelevant to finite samples. Kolmogorov, Le Cam and Pfanzagl are right.
  • In contrast, approximation results hold for some definite (finite) sample size (described in terms of the parameters), and often for all n. (Cf. Kantorovich's "Functional Analysis and Applied Mathematics", where this distinction is stressed.)
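The two bullets can be contrasted schematically (my own paraphrase; the statistic $T_n$, parameter $\theta$, and constant $C(\theta)$ are generic placeholders, not notation taken from the cited authors):

```latex
% Limiting result: consistency asserts only a limit, with no
% guarantee at any fixed sample size n.
\lim_{n \to \infty} \Pr\bigl( |T_n - \theta| > \varepsilon \bigr) = 0
\quad \text{for every } \varepsilon > 0 .

% Approximation result: an explicit, nonasymptotic bound valid at every
% finite n (e.g. a Chebyshev-type bound when \operatorname{Var}(T_n) = C(\theta)/n):
\Pr\bigl( |T_n - \theta| > \varepsilon \bigr)
\le \frac{C(\theta)}{n\,\varepsilon^{2}}
\quad \text{for all } n \ge 1 .
```

The first statement alone licenses no conclusion at any particular n; the second gives a usable bound at every n, which is the distinction being drawn.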
Nobody denies that many asymptotically nifty procedures perform well for moderately sized samples (even without an approximation result having been proved): on the contrary, I added the precise statement by Hoffman-Jörgensen about the role of experience and simulations in evaluating methods. (Pfanzagl's books have lots of simulation studies, as well as very precise asymptotics.) The objection is to the illegitimate inference from limiting results to some fixed sample, without referring to simulation studies, experience, etc. Kiefer.Wolfowitz (talk) 23:11, 17 March 2010 (UTC)
I rewrote a short version of the distinction between approximation results and limiting results. The footnote now quotes Kolmogorov, Le Cam and Pfanzagl on the irrelevance of limiting results to finite samples, and now gives precise page references. Kiefer.Wolfowitz (talk) 22:27, 21 March 2010 (UTC)
I've edited this to remove inflammatory statements about limiting results being "irrelevant". Limiting results - while certainly not providing inference on their own - do provide the basis for approximations that people use in practice, so they 'are' relevant to the inferential process. Moreover, for many applied uses of multivariate regression, in practice the degree of approximation to the true sampling distribution is by far not the biggest problem with inference... calling GMoM or GEE (or their justification) "irrelevant" is not appropriate. McPastry (talk) 03:49, 25 March 2010 (UTC)
I ask that you respond below. You removed my statement about irrelevance when it was cited with references to Kolmogorov's article and Le Cam's book (but without quotations or page numbers): in fact, your editing dismissed the sentence with the comment "Look at any statistical journal to see why", which seemed risky behavior given the citation of Kolmogorov and Le Cam (and given my record of editing!). You later removed the statement about "irrelevance" after I had provided precise citations with page numbers and the relevant quotations. On the question of "irrelevance", your editing has been very point-of-view, imho.
You are correct that some expansions are useful in practice. I would welcome your adding something about Taylor-series expansions of nice functions (or Padé approximations), etc. and their use in practice: see Pfanzagl's LNMS monograph for nice examples, which also warn against fallacies. Kiefer.Wolfowitz (talk) 16:57, 27 March 2010 (UTC)
B. "Fiducial inference" is notoriously only a collection of (one-parameter) examples --- see Neyman. I would welcome your deleting the whole subsection from this article, because even recent neo-Fisherians (Davison, etc.) think that its sun has set. (The "See also" section could list the article on Fiducial inference of course.)
No objection to trimming this down a bit, though I feel it merits some discussion, as it's probably the best-known paradigm outside of Bayes and frequentism, whether or not its "sun has set". I think a description of the general idea - even if only for one-parameter problems - would greatly help the reader. McPastry (talk) 06:17, 17 March 2010 (UTC)
This statement about fiducial being the third most popular is false. Compare the number of researchers working on information theory with those working on "fiducial" inference, which was ignored by JASA 2000, etc. It may be true that fans of David Cox and John Nelder (likelihood "wallahs" in Basu's terms) are arguably more popular: is that what you mean? Kiefer.Wolfowitz (talk) 23:16, 17 March 2010 (UTC)
As I'm sure you read, I did say "probably". I don't believe there is hard evidence either way on the numbers of researchers or amount of work, or indeed that any hard-and-fast ordering is available or useful. I mean that lots of (educated, smart, reasonable) people are aware of fiducial inference as a form of statistical inference, and as such it merits some discussion in this encyclopaedia entry. By stating that my statement was "false" you, again, appear to lack a sense of diplomacy, and of proportion. McPastry (talk) 04:16, 19 March 2010 (UTC)
Please criticize behavior rather than persons.
I was criticizing your commenting that my statement was false. Why do you persist in arguing over this? We agreed that the fiducial inference section could be trimmed. McPastry (talk) 03:31, 25 March 2010 (UTC)
Your conclusion doesn't follow from the premise. Plenty of educated, smart, reasonable people are aware of the Laplacian principle of indifference (giving probability one-half to a proposition in a state of ignorance), monkeys typing at keyboards, etc. as forms of statistical inference, also --- to say nothing of tarot cards and ouija boards. Thanks Kiefer.Wolfowitz (talk) 16:00, 21 March 2010 (UTC)
It's an encyclopedia; entries are there by judgement, not by logical deduction. Also, I find your allusions to monkeys, tarot and ouija insulting, and way out of proportion. McPastry (talk) 03:31, 25 March 2010 (UTC)
I'm sorry that you feel insulted. I had wished rather that you should have recognized the fallacy of your argument from its ability to generate such conclusions, and so that you should have been motivated to formulate a viable argument (or withdraw your position). Kiefer.Wolfowitz (talk) 19:02, 27 March 2010 (UTC)
I don't recognize any fallacy in what I wrote (which, as you'll recall, was that it seems worthwhile to me that the article mention fiducial inference, briefly). If that isn't your opinion, fine. Why are you trying to make this a bare-knuckle epistemological debate? McPastry (talk) 20:49, 27 March 2010 (UTC)
My statement
  • "Your conclusion doesn't follow from the premise. Plenty of educated, smart, reasonable people are aware of Laplacian principle of indifference (giving probability one-half to a proposition in a state of ignorance), monkeys typing at keyboards, etc. as forms of statistical inference, also --- to say nothing of tarot cards and ouija boards."
was intended to clarify the error in your statement:
  • "I mean that lots of (educated, smart, reasonable) people are aware of fiducial inference as a form of statistical inference, and as such it merits some discussion in this encyclopaedia entry."
The acceptable conclusion (keep some mention of fiducial inference) may follow from good premises, but not from one that leads to bad conclusions. For example, discussing "fiducial inference" may alert innocent youth that Fisher committed many logical fallacies and mathematical blunders, and that they should think twice before partaking in the initiation rites of the Fisher cult. (Cf. Areopagitica's argument for the benefits of reading fallacious arguments and wrong positions.) Thanks Kiefer.Wolfowitz (talk) 21:25, 27 March 2010 (UTC)[reply]
Why are you so fussed about the statement of an opinion? "I feel it [fiducial inference] merits some discussion" was where we started. I still feel that way. The fact that other people wrote up fiducial inference for this article suggests I am not alone. Accusing me of falsehoods, fallacies, and making other insulting comments helps no one. Please stop. McPastry (talk) 23:38, 27 March 2010 (UTC)[reply]

"Irrelevance" of limits for finite samples[edit]

Editor McPastry has labeled the word "irrelevant" as inflammatory and inappropriate to this article, and has repeatedly removed the word "irrelevant" despite its being used in citations from Kolmogorov and Le Cam. This is point-of-view editing. I ask that McPastry cite a reliable source explaining why Le Cam and Kolmogorov are wrong to use "irrelevant", and otherwise stop point-of-view censorship. Thanks. Kiefer.Wolfowitz (talk) 16:49, 27 March 2010 (UTC)[reply]

My concern is that readers confuse the meaning of this term in logical deduction with its lay meaning - where relevance means having "significant and demonstrable bearing upon the matter at hand" (Merriam Webster). So, saying that e.g. asymptotic results are 'irrelevant' to finite samples suggests they have no important place in the analysis of any actual inference based on real data (which must have finite sample size). Such a statement would be far too strong; these results get used all the time, successfully, and demonstrably - look in any applied stats journal.
These results are invoked often, agreed. However, a limiting result is irrelevant to the problem at hand, while approximation results and simulations are relevant. I would suggest your reading Philip Wolfe's paper on "A universal algorithm for optimization" (Math. Programming c. 1973) for clarification about the distinction between limiting results and finite computations/observations. Kiefer.Wolfowitz (talk) 19:28, 27 March 2010 (UTC)[reply]
Please read my entry above. You are interpreting 'irrelevant' in its strict mathematical sense (and I'm not disagreeing with what you say). But the quotation of Le Cam in the article states that limit results are relevant in the lay sense. Hence, disambiguation is required. McPastry (talk) 20:00, 27 March 2010 (UTC)[reply]
OK. The resulting text seems much improved over either of our earliest versions. Thanks. Kiefer.Wolfowitz (talk) 20:35, 27 March 2010 (UTC)[reply]
I have suggested 'formally irrelevant' as a compromise term. 'Logically irrelevant' could also work. Regarding Kolmogorov/Le Cam/any other great probabilist or statistician you admire, please confine the biographical information to the references. The presence of a citation denotes that the statement made has support; the flow of the text's description of e.g. what limiting results provide is badly disrupted by the insertion of reasons why these specific authors are being cited. McPastry (talk) 19:21, 27 March 2010 (UTC)[reply]
Okay, the last round of editing retained the word "irrelevant". I dislike the weasel word and redundancy "formally", but I'm tired of arguing and will let it stand. Thanks. Kiefer.Wolfowitz (talk) 19:08, 27 March 2010 (UTC)[reply]
See above for why. I dislike your description of my text as weasel words. McPastry (talk) 19:21, 27 March 2010 (UTC)[reply]

Barnard's influence

I removed the following unsourced statements:

Barnard's work has inspired further work by D. A. Fraser, P. Dawid, Per Martin-Löf and Steffen Lauritzen on sufficient statistics and statistical models.[citation needed] In the case of "structural" influence, there seem to be few applications outside of group-transformation families and one-parameter exponential families. Work by Barnard and Harold Jeffreys has also influenced approaches to "objective" Bayesian inference by George E. P. Box and E. T. Jaynes.[citation needed] Box used "locally" objective priors; Jaynes proposed maximum entropy prior distributions. Many of these approaches are related to harmonic analysis, in particular to the theory of representations of semigroups.[citation needed]

Thanks. Kiefer.Wolfowitz (talk) 21:07, 11 June 2010 (UTC)[reply]

Was that the tagged synthesis? If not, it should be pointed out; if so, the flag should be removed. 72.228.189.184 (talk) 19:13, 24 May 2012 (UTC)[reply]
The synthesis tag is dated March 2012, later than 11 June 2010. The synthesis concern applies throughout the article: various opinions are expressed without sources (which should be given, otherwise it is OR) alongside citations to published works, arranged in such a way that it looks as if the citations back up the opinions. What is needed are some published summaries of the whole field (or wide subfields) of statistical inference (and its development) that can be used as sources for statements about importance, development, relevance, etc. The tag was originally placed because of a more blatant synthesis, which has since been removed and replaced by something presumably better, but overall the problem seems to remain, possibly at a lower level. Melcombe (talk) 19:59, 24 May 2012 (UTC)[reply]

The following may be original research, so (according to wp:nor) I don't include it in the article, but I have found it useful and enlightening.

An unknown number may be estimated by an "order of magnitude", which is the mean value of its probability distribution, and an "uncertainty", which is the standard deviation of that distribution.

If the uncertainty is zero then the result is certain.

Deduction is estimating a part from the whole.

Induction is estimating the whole from a part.

Prediction is estimating from one part to another.

The following expressions in the J programming language implement these three operations.

      deduce =: %~`*/"2@(,:(%:@*-.))@(+/@[%~1,,:)
      predict=: (deduce~-@>:)~
      induce =: (,:0:)+[predict(-+/)~

Example. Based on a test the pupils in a school class are classified into categories.

      a =: 20 5 0 NB. There are 20 pupils in category A, 5 in category B, and 0 in category C.

The first line of the following answers contains the orders of magnitude, and the second line contains the uncertainties.

      a deduce 0  NB. certain deduction when the sample is empty
0 0 0
0 0 0
      a deduce 25 NB. certain deduction when the sample is the whole population
20 5 0
 0 0 0
      25 0 0 deduce 10 NB. certain deduction when all pupils are in the same category
10 0 0
 0 0 0
      a deduce 10 NB. uncertain deduction in the general case
8 2 0
1 1 0
      a induce 1000 NB. How are the 1000 pupils in the whole school distributed?
751.25 213.929 34.8214
79.516 75.3499 34.0783
      a predict 25 NB. How are 25 pupils in another class distributed?
  18.75 5.35714 0.892857
2.92691 2.77356  1.25439

Prediction is generally somewhat uncertain.

      a predict 0 NB. trivially certain prediction
0 0 0
0 0 0
      0 0 0 predict 3 NB. uncertain prediction
1 1 1
1 1 1
      0 1 predict 3 NB. more information gives better predictions
1 2
1 1 
      0 0 0 996  predict 4 NB. plenty of information gives good predictions
    0.004     0.004     0.004    3.988
0.0633086 0.0633086 0.0633086 0.109544

The deduce program is the well-known formula for the mean value and standard deviation of the multivariate hypergeometric distribution: drawing n items without replacement from a population with category counts a_i summing to N, the count in category i has mean n*a_i/N and variance n*(a_i/N)*(1 - a_i/N)*(N - n)/(N - 1). The possibly original research is the observation that the prediction formula follows from the deduction formula by increasing each count by one and changing its sign.

      a
20 5 0
      -(1+a)
_21 _6 _1
      (-(1+a))deduce 25 NB. this is how prediction works. Amazing!
  18.75 5.35714 0.892857
2.92691 2.77356  1.25439
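For readers who do not run J, here is a rough Python sketch of the same computation. The helper names deduce and predict are mine, mirroring the J verbs above, and the code simply applies the multivariate hypergeometric mean/SD formula together with the "increase by one and change sign" observation; it is a translation sketch, not a definitive rendering of the J definitions.

```python
import math

def deduce(a, n):
    """Mean and standard deviation of each category count in a sample
    of size n drawn without replacement from a population with category
    counts a (multivariate hypergeometric distribution)."""
    N = sum(a)
    means = [n * ai / N for ai in a]
    sds = [math.sqrt(n * (ai / N) * (1 - ai / N) * (N - n) / (N - 1))
           for ai in a]
    return means, sds

def predict(a, n):
    """Predict a fresh sample of size n from an observed sample a,
    by applying deduce to the incremented, sign-changed counts -(a+1)."""
    return deduce([-(ai + 1) for ai in a], n)

# a deduce 10: sampling 10 of the 25 known pupils
means, sds = deduce([20, 5, 0], 10)    # means ~ [8, 2, 0], sds ~ [1, 1, 0]

# a predict 25: a fresh class of 25 pupils
pmeans, psds = predict([20, 5, 0], 25)  # pmeans ~ [18.75, 5.357, 0.893]
```

The negative intermediate counts look odd, but the variance term stays positive because both (1 - a_i/N) and (N - n)/(N - 1) change sign together, so the square root is well defined.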

Bo Jacoby (talk) 10:41, 18 December 2012 (UTC).[reply]