Archive of posts filed under the Miscellaneous Statistics category.

Still cited only 3 times

I had occasion to refer to this post from a couple years ago on the anthropic principle in statistics. In that post, I wrote: I actually used the anthropic principle in my 2000 article, Should we take measurements at an intermediate design point? (a paper that I love; but I just looked it up and […]

“Data in Wonderland”: A course on storytelling with data

Scott Spencer is teaching this class at Columbia. It looks really cool.

Indeed, the standard way that statistical hypothesis testing is taught is as a 2-way binary grid. Both of these dichotomies are inappropriate.

I originally gave this post the title, “New England Journal of Medicine makes the classic error of labeling a non-significant difference as zero,” but as I was writing it I thought of a more general point. First I’ll give the story, then the general point. 1. Story Dale Lehman writes: Here are an article and […]

Evidence-based medicine eats itself in real time

Robert Matthews writes:

Guttman points out another problem with null hypothesis significance testing: It falls apart when considering replications.

Michael Nelson writes: Re-reading a classic from Louis Guttman, What is not what in statistics, I saw his “Problem 2” with new eyes given the modern replication debate: Both estimation and the testing of hypotheses have usually been restricted as if to one-time experiments, both in theory and in practice. But the essence of science […]
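Nelson’s (and Guttman’s) point is easy to see in a small simulation. Here is a minimal sketch, with parameters I made up: run the same modest-effect experiment thousands of times and look at how much the p-value bounces around across replications.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical setup (my numbers, not Guttman's or Nelson's): a two-sample
# experiment with a modest true effect, run over and over with fresh data.
n_per_group, true_effect, n_reps = 50, 0.3, 10_000

pvals = np.empty(n_reps)
for i in range(n_reps):
    a = rng.normal(0.0, 1.0, n_per_group)
    b = rng.normal(true_effect, 1.0, n_per_group)
    pvals[i] = stats.ttest_ind(a, b).pvalue  # same design, new replication

# The one-time-experiment framing hides how unstable any single p is.
print(f"share of replications with p < 0.05: {(pvals < 0.05).mean():.2f}")
print("5th / 50th / 95th percentiles of p:",
      np.percentile(pvals, [5, 50, 95]).round(3))
```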

Can the “Dunning-Kruger effect” be explained as a misunderstanding of regression to the mean?

The above (without the question mark) is the title of a news article, “The Dunning-Kruger Effect Is Probably Not Real,” by Jonathan Jarry, sent to me by Herman Carstens. Jarry’s article is interesting, but I don’t like its title, and I don’t like the framing of this sort of effect as “real” or “not real.” I […]
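For the mechanics of the regression-to-the-mean explanation, here is a minimal sketch in code. The model is a toy of my own, not Jarry’s analysis: self-assessments and test scores are both noisy but unbiased measures of the same underlying skill, so no one in this world is actually miscalibrated, and yet the familiar Dunning-Kruger plot appears.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Toy model (my assumptions): both measurements are unbiased but noisy
# reads of the same skill; there is no actual miscalibration anywhere.
skill = rng.normal(size=n)
test_score = skill + rng.normal(size=n)
self_assess = skill + rng.normal(size=n)

def to_percentile(x):
    """Rank-transform to a 0-100 percentile scale."""
    return 100 * x.argsort().argsort() / (len(x) - 1)

test_pct = to_percentile(test_score)
self_pct = to_percentile(self_assess)

# Group by test-score quartile, as in the usual Dunning-Kruger plots:
# the bottom quartile "overestimates" and the top "underestimates,"
# purely from regression to the mean.
quartile = np.minimum((test_pct // 25).astype(int), 3)
for q in range(4):
    m = quartile == q
    print(f"test quartile {q + 1}: actual percentile {test_pct[m].mean():5.1f}, "
          f"self-assessed {self_pct[m].mean():5.1f}")
```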

When to use ordered categorical regression?

Alex Andorra writes: I was re-reading section 15.5 (multinomial models) of Regression and Other Stories, and this portion on page 275 made me curious: Examples of ordered categorical outcomes include Democrat, Independent, Republican; Is this a typo, or can these categories really be considered as ordered in a multinomial model? If this is indeed a typo, […]
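For reference, here is a minimal sketch of what “ordered” buys you in a cumulative-logit (ordered logistic) model. The cutpoints and the conservatism predictor below are made-up numbers for illustration, not anything from the book.

```python
import numpy as np

def ordered_logit_probs(eta, cutpoints):
    """Cumulative-logit model: P(y <= k) = logistic(c_k - eta), so the
    categories are ordered slices of a single latent continuum and a
    predictor can only shift mass monotonically along that scale."""
    def expit(z):
        return 1.0 / (1.0 + np.exp(-z))
    cum = np.concatenate(([0.0], expit(np.asarray(cutpoints) - eta), [1.0]))
    return np.diff(cum)

# Made-up cutpoints and predictor values, standing in for the party-ID
# example: Democrat < Independent < Republican on a latent left-right axis.
cutpoints = [-0.5, 0.5]
for conservatism in (-1.0, 0.0, 1.0):
    d, i, r = ordered_logit_probs(conservatism, cutpoints)
    print(f"conservatism = {conservatism:+.0f}: P(D) = {d:.2f}, "
          f"P(I) = {i:.2f}, P(R) = {r:.2f}")
```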

He wants to test whether his distribution has infinite variance. I have other ideas . . .

Evan Warfel asks a question: Let’s say that a researcher is collecting data on people for an experiment. Furthermore, it just so happens that due to the data collection procedure, data is gathered and recorded in 100-person increments, making it so that the researcher effectively has a time series, and at some point t, they […]
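One of those other ideas: rather than a formal test, watch how the sample variance behaves as the batches accumulate. The sketch below uses Pareto draws of my own choosing, not Warfel’s data; with tail index alpha < 2 the variance is infinite and the running estimate never settles down.

```python
import numpy as np

rng = np.random.default_rng(7)

# Classical Pareto with x_m = 1: infinite variance when alpha < 2,
# finite variance (here 0.75) when alpha = 3.
checkpoints = [100, 1_000, 10_000, 100_000]
for alpha in (1.5, 3.0):
    x = rng.pareto(alpha, size=max(checkpoints)) + 1.0
    running = {k: round(float(x[:k].var(ddof=1)), 2) for k in checkpoints}
    # alpha = 1.5: the estimate keeps growing with n;
    # alpha = 3.0: it roughly stabilizes near the true value of 0.75.
    print(f"alpha = {alpha}: sample variance by n:", running)
```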

“Do you come from Liverpool?”

Paul Alper writes: Because I used to live in Trondheim, I have a special interest in this NYT article about exercise results in Trondheim, Norway: Obviously, even without reading the article in any detail, the headline claim that “The Secret to Longevity? 4-Minute Bursts of Intense Exercise May Help” can be misleading and is subject […]

Reading, practicing, talking, and questioning

Roger Henke writes: I have somewhat of a background in broad-strokes policy research. My knowledge of research methodology and stats is very limited, and in hindsight I am quite flabbergasted by some of what I’ve claimed in the past based on questionable, to say the least, data and approaches, and equally so by the […]

Responding to Richard Morey on p-values and inference

Jonathan Falk points to this post by Richard Morey, who writes: I [Morey] am convinced that most experienced scientists and statisticians have internalized statistical insights that frequentist statistics attempts to formalize: how you can be fooled by randomness; how what we see can be the result of biasing mechanisms; the importance of understanding sampling distributions. […]

Can you trust international surveys? A follow-up

Michael Robbins writes: A few years ago you covered a significant controversy in the survey methods literature about data fabrication in international survey research. Noble Kuriakose and I put out a proposed test for data quality. At the time there were many questions raised about the validity of this test. As such, I thought you […]

More on that credulity thing

I see five problems here that together form a feedback loop with bad consequences. Here are the problems: 1. Irrelevant or misunderstood statistical or econometric theory 2. Poorly-executed research 3. Other people in the field being loath to criticize, taking published or even preprinted claims as correct until proved otherwise 4. Journalists taking published or […]

Epistemic and aleatoric uncertainty

There was some discussion in comments recently about the distinction between aleatoric uncertainty (physical probabilities such as coin flips) and epistemic uncertainty (representing ignorance rather than an active probability model). We’ve talked about this before, but not everyone was reading this blog 15 years ago, so I’ll cover it again here. For a very similar […]
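A toy example of my own (not from the old post) makes the split concrete: predicting the next coin flip. Given the coin’s probability p, the flip itself is aleatoric; our ignorance about p is epistemic, and the law of total variance separates the two terms exactly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Coin-flip toy (my example): after seeing 7 heads and 3 tails, predict the
# next flip. Posterior draws for p carry the epistemic uncertainty; the
# Bernoulli flip given p carries the aleatoric uncertainty.
heads, tails = 7, 3
p = rng.beta(1 + heads, 1 + tails, size=100_000)  # posterior, uniform prior

p_bar = p.mean()
total = p_bar * (1 - p_bar)       # Var(next flip)
aleatoric = (p * (1 - p)).mean()  # E[Var(flip | p)]: irreducible randomness
epistemic = p.var()               # Var(E[flip | p]): ignorance about p

print(f"total {total:.4f} = aleatoric {aleatoric:.4f} + epistemic {epistemic:.4f}")
# Collecting more flips shrinks the epistemic term; the aleatoric term remains.
```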

Confidence intervals, compatibility intervals, uncertainty intervals

“Communicating uncertainty is not just about recognizing its existence; it is also about placing that uncertainty within a larger web of conditional probability statements. . . . No model can include all such factors, thus all forecasts are conditional.” — us (2020). A couple years ago Sander Greenland and I published a discussion about renaming […]

Richard Hamming’s “The Art of Doing Science and Engineering”

I bought this charming book and started flipping through and reading bits here and there. It has a real mid-twentieth-century feel, reminiscent of Richard Feynman, Martin Gardner, and Hugo Steinhaus. It gives me some nostalgia, thinking about a time when it was expected that students could do all sorts of math—it kinda made me wish […]

“When Should Clinicians Act on Non–Statistically Significant Results From Clinical Trials?”

Javier Benítez pointed me to this JAMA article by Paul Young, Christopher Nickson, and Anders Perner, “When Should Clinicians Act on Non–Statistically Significant Results From Clinical Trials?”, which begins: Understanding whether the results of a randomized clinical trial (RCT) are clinically actionable is challenging. Reporting standards adopted by JAMA and other leading journals lead to […]

“Not statistically significant” is not the same as zero

Under the subject line, “Null misinterpretation of CIs reaches new level of lethality,” Sander Greenland points us to this article with the following in the Results section: Compared to no masks there was no reduction of influenza-like illness (ILI) cases (Risk Ratio 0.93, 95%CI 0.83 to 1.05) or influenza (Risk Ratio 0.84, 95%CI 0.61-1.17) for […]
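To see what that interval actually says, you can back out the uncertainty on the log scale from the published numbers alone. A quick sketch of the arithmetic (no reanalysis of the trial data):

```python
import numpy as np

# Reported for influenza-like illness: RR 0.93, 95% CI 0.83 to 1.05.
lo, hi = 0.83, 1.05
log_se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # implied SE on log scale
rr_hat = np.exp((np.log(hi) + np.log(lo)) / 2)    # implied point estimate

print(f"point estimate RR ~ {rr_hat:.2f}, log-scale SE ~ {log_se:.3f}")
# The interval includes RR = 0.85 (a 15% reduction) about as comfortably as
# RR = 1, so "no reduction" treats one of many compatible values as proved.
```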

The problem with p-hacking is not the “hacking,” it’s the “p”

Clifford Anderson-Bergman writes: On CrossValidated, a discussion came up I thought you may be interested in. The quick summary is that a poster asked whether Fisher’s advice to go get more data when results are statistically insignificant isn’t essentially an endorsement of p-hacking. After a bit of a discussion that spanned an answer […]
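The worry is easy to make concrete with a simulation of my own devising (not from the CrossValidated thread): set the true effect to zero, test after every new batch of data, and stop the moment p dips below 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# True effect is zero; peek after every batch, stop at p < 0.05 or at max N.
n_sims, batch, max_batches = 2_000, 20, 25
false_positives = 0
for _ in range(n_sims):
    x = np.empty(0)
    for _ in range(max_batches):
        x = np.concatenate([x, rng.normal(0.0, 1.0, batch)])
        if stats.ttest_1samp(x, 0.0).pvalue < 0.05:
            false_positives += 1
            break

print(f"type I error with peeking: {false_positives / n_sims:.2f} (nominal 0.05)")
```

With 25 looks, the realized type I error comes out well above the nominal 5%, which is the sense in which “just collect more data” shades into p-hacking whenever the stopping rule depends on significance.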

No, I don’t like talk of false positives, false negatives, etc., but it can still be useful to warn people about systematic biases in meta-analysis

Simon Gates writes: Something published recently that you might consider blogging: a truly terrible article in Lancet Oncology. It raises the issue of interpreting trials of similar agents and the issue of multiplicity. However, it takes a “dichotomaniac” view and so is only concerned about whether results are “significant” (=”positive”) or not, and suggests applying […]