
Reading, practicing, talking, and questioning

Roger Henke writes:

I have somewhat of a background in broad-strokes policy research. My knowledge of research methodology and stats is very limited, and in hindsight I am quite flabbergasted by some of what I’ve claimed in the past based on questionable (to say the least) data and approaches, and equally so by the lack of scrutiny those data and approaches were subjected to by peers, journals, and funders/commissioners of the ‘research’. The blog is a great reminder of those limitations, and of all that is wrong with the way a lot of social science is done. Quite a lot of the posts are beyond my technical understanding, but as teachers of humility, open-mindedness, and dealing with information labelled social science, I value the blog greatly.

Another blog that I subscribe to just published this piece.

Given the intended audience of this blog, this kind of advice regarding statistics feels so utterly wrong that I am starting to wonder if all the fuss about change-finally-has-come of the last decade is an info-bubble impression, totally disconnected from reality. Am I reading this advice the wrong way?

I took a look at the post in question, by Danielle Bodicoat, which is called “How to get confident with statistics” and gives the following suggestions:

You are more than capable of using statistics

1. Know that we all Google stuff

2. Read the statistics analysis sections of papers

3. Use statistics as often as you can

4. Practice talking about statistics

5. Ask questions

I then replied to Henke, asking why he thought the advice there was so wrong. It seemed ok to me. I agree that statistics is hard, and she could emphasize that—it could be misleading to say “It’s that simple,” etc.—but the specific advice to look things up, practice a lot, etc., seems reasonable.

Henke responded:

Yes, you are right about her specific advice, but it is all based on the assumption that some stats courses during one’s coursework for becoming a researcher in social science subject X are sufficient to get a good grasp of the basics—good enough to be able to really understand whether all the wizardry that programs like SPSS allow you to do is allowed, in the sense that your data etc. conform to the assumptions underlying the wizardry in your specific case, and to be able to self-teach by way of sources one googles, etc. The Feynmans of this world may be able to grab a book and master the subject on their own, but I am quite pessimistic about how many of them walk the planet.

My reading of the widespread misuse of stats in social science is that it is mostly caused by people – like me – who just don’t know enough to responsibly apply the methodologies and analytic techniques that they are able to mechanically execute. I have come to think of research methodology—including everything to do with being able to properly assess data quality etc., plus analytic techniques to draw inferences from one’s material and the math underlying them—as something that requires an educational preparation at the level of a specialized MA. Someone not able to really understand your textbook, e.g. at the level of being able to discuss the assumptional rationales and possible alternatives of specific details of it with a fellow researcher/statistician, runs considerable risks of misusing the toolbox.

An obvious solution is requiring any project to have a stats advisor on board. That in itself would prevent lots of crap being produced. But in my pessimistic opinion one only gets close to an optimal situation when both conversation partners know enough about each other’s field of expertise to have a real conversation, implying that the methodology/stats advisor knows enough of the field the data come from to really understand the kinds of distributions one may or may not assume, and, e.g., knows enough to engage in a discussion about how sensible the subject-matter theoretical assumptions underlying the project are; and the subject-matter expert knows enough to ask the right questions and is able to understand the suggested techniques in enough depth, when they are being explained, to know what can and cannot be said after applying them.

So yes, her advice to research students makes sense from one perspective: they really need to know a lot more about stats and that requires a lot of study, but I am pretty sure that anyone ‘mastering’ some basics and then going to the SPSS website to find out how to run a regression doesn’t really know what they are doing.

And the one piece of advice that I actually disagree with, given the current state of affairs in the social sciences, is that students should read the stats analysis sections of papers to learn what others do (she calls it patterns) and how to write a good one yourself. Shouldn’t most researchers start doing something different?

To which my reply is: Sure, if you take Bodicoat’s post as saying, “Do these 5 steps and you’ll be a stat wiz,” then, yeah, that would be bad advice. But I don’t see that as her message. Rather, I see the message as, “Like it or not, you’re gonna be doing a lot of statistics, so take it seriously, and start now by reading up and getting lots of practice. Yes, you are more than capable of using statistics, but that doesn’t mean that statistics is easy. Statistics is hard, it’s not cookbook-codified, so if you want to use statistics—and you’re gonna have to—now’s the time to get going and learn.”

There’s always this balance in learning a skill, where you need enough confidence to try it, while still knowing your limitations. I think the steps of reading, practicing, talking, and questioning are a good way forward. The challenge of giving this advice is that you want to empower people without getting them overconfident, which is Henke’s concern.
