
New textbook, “Statistics for Health Data Science,” by Etzioni, Mandel, and Gulati

Ruth Etzioni, Micha Mandel, and Roman Gulati wrote a new book that I really like. Here are the chapters:

1 Statistics and Health Data
1.1 Introduction
1.2 Statistics and Organic Statistics
1.3 Statistical Methods and Models
1.4 Health Care Data
1.5 Outline of the Text
1.6 Software and Data

2 Key Statistical Concepts
2.1 Samples and Populations
2.2 Statistics Basics
2.3 Common Statistical Distributions and Concepts
2.4 Hypothesis Testing and Statistical Inference

3 Regression Analysis
3.1 Introduction
3.2 Trends in Body Mass Index in the United States
3.3 Regression Overview
3.4 An Organic View of Regression
3.5 The Linear Regression Equation and Its Assumptions
3.6 Linear Regression Estimation and Interpretation
3.7 Model Selection and Hypothesis Testing
3.8 Checking Assumptions About the Random Part
3.9 Do I Have a Good Model? Goodness of Fit and Model Adequacy
3.10 Quantile Regression
3.11 Non-parametric Regression

4 Binary and Categorical Outcomes
4.1 Introduction
4.2 Binary Outcomes
4.3 Linear Regression with a Binary Outcome
4.4 Logistic Regression
4.5 Interpretation of a Logistic Regression
4.6 Interpretation on the Probability Scale
4.7 Model Building and Assessment
4.8 Multinomial Regression

5 Count Outcomes
5.1 Count Outcomes
5.2 The Poisson Distribution
5.3 Two Count Data Regression Models
5.4 Poisson Regression for Individual-Level Counts
5.5 Poisson Regression for Population Counts
5.6 Overdispersion, Negative Binomial, and Zero-Inflated Models
5.7 Generalized Linear Models

6 Health Care Costs
6.1 Defining and Measuring Health Care Costs
6.2 MEPS Data on Health Care Utilization and Costs
6.3 Log Cost Models and the Lognormal Distribution
6.4 Gamma Models for Right-Skewed Cost Outcomes
6.5 Including the Zeros: The Two-Part Model
6.6 Beyond Mean Costs

7 Bootstrap Methods
7.1 Uncertainty and Inference in Statistical Models
7.2 The Bootstrap for Variance Estimation
7.3 Bootstrap Confidence Intervals
7.4 Hypothesis Testing
7.5 Summary

8 Causal Inference
8.1 Introduction
8.2 Simpson’s Paradox
8.3 Causal Graphs
8.4 Building a Causal Graph
8.5 Estimating the Causal Effect
8.6 Propensity Scores
8.7 Mediation Analysis
8.8 Potential Outcomes

9 Survey Data Analysis
9.1 Introduction
9.2 Introduction to Health Surveys
9.3 National Health Surveys
9.4 Basic Elements of Survey Design
9.5 Stratified Sampling
9.6 Clustered Sampling
9.7 Variance Estimation and Weighting in Complex Surveys
9.8 Analyzing Survey Data: The Cost of Diabetes in the United States

10 Prediction
10.1 Explaining Versus Predicting
10.2 Overfitting and the Bias-Variance Tradeoff
10.3 Evaluating Predictive Performance
10.4 Cross-Validation
10.5 Regularized Regression
10.6 Tree-Based Methods
10.7 Ensemble Methods: Random Forests
10.8 Summary

And, the best thing: they do all this in only 217 pages!

They forgot a few things (for example not mentioning the divide-by-4 rule when discussing how to interpret logistic regression coefficients; and I think they make a mistake in chapter 5 by going on and on about Poisson regression before finally mentioning overdispersion; and they keep using the exp(…)/(1 + exp(…)) formulation rather than just defining invlogit; and when they talk about matching they don’t mention that it’s not matching or regression, it’s matching and regression; and if they’re gonna mention analysis of surveys I think they should talk about regression; and when they mention regularized regression they mention methods that partially pool coefficients toward zero without mentioning the key role of parameterization in setting up such models (see section 5.1 of this article); and there’s very little on design, sample size, and power analysis). But that’s all minor considering all the things they have that I love: they take measurement seriously, they focus on the deterministic rather than the random part of statistical models, they have lots of graphs, and they motivate the methods with real problems. They even explain the log transformation—I only regret that this is buried in the middle of chapter 6 where many teachers and students won’t notice it.
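To unpack two of those points for readers who haven’t seen them: invlogit is just the function exp(x)/(1 + exp(x)), and the divide-by-4 rule says that a logistic regression coefficient divided by 4 gives an upper bound on the change in probability per unit change in the predictor (the bound is attained where the predicted probability is 0.5). Here’s a minimal sketch in Python—not from the book, just an illustration:

```python
import numpy as np

def invlogit(x):
    """Inverse logit: maps log-odds to probabilities, exp(x)/(1 + exp(x))."""
    return np.exp(x) / (1 + np.exp(x))

# Divide-by-4 rule: the logistic curve invlogit(a + beta*x) is steepest
# where a + beta*x = 0 (predicted probability 0.5), and its slope there
# is exactly beta/4. So beta/4 bounds the change in probability per
# unit change in x.
beta = 0.8

# Numerically check the slope at the midpoint with a centered difference:
eps = 1e-6
slope_at_midpoint = (invlogit(eps * beta) - invlogit(-eps * beta)) / (2 * eps)
print(round(slope_at_midpoint, 3))  # 0.2, i.e., beta/4
```

The point of the rule is quick interpretation: a coefficient of 0.8 means, at most, about a 0.2 change in probability per unit change in that predictor.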

If I could add just a little bit to the book, I’d add a chapter on graphical display of data and models—the authors do this very well, so I’d just like them to share these insights with their readers—and, as noted above, it would also be good to have a chapter on design. But that’s no big deal—you can find that in Regression and Other Stories, which is only $40 . . . ahhhh, to hell with it, I’m not here to sell you anything, you can just download our chapter 2 and chapter 16 right now.

Anyway, my point for today is that the Etzioni et al. book is crisp, modern, conceptual, and applied, and I recommend it for statistics for health data science students.

P.S. Zad sends along the above picture of Ace, who is newly adopted and ready to learn!


  1. Michael Nelson says:

    Andrew, you (et al) do a great job in RAOS explaining both what null hypothesis testing is and why it’s a bad idea to use it the way it’s most often used. I especially liked how you gave examples of NHST “for reference” but then recast it in the context of model testing. I noticed there’s a chapter on NHST in this book, and since you didn’t bring it up as something you’d change, can I infer their treatment of the topic is similar? Or is (mis)teaching NHST still so ingrained in stats texts that there’s no point calling them out for it? I know that a lot of courses/profs wouldn’t use a textbook that ignores NHST altogether, but I’d hope we could at least move toward a “teach the controversy” approach!

    • Andrew says:


      I can’t remember how Etzioni et al. handled hypothesis testing. I don’t think they did the full debunking like we did; it’s more that they didn’t emphasize it. Hypothesis testing is there, but they don’t treat it as foundational.

      • There is a larger issue here related to the disconnect between the methods many statisticians advocate and publish in statistics journals and the kinds of methods actually taught to applied students, especially in the health sciences, and actually applied in consulting settings.

        For example, I know statisticians who may call themselves “Bayesian” and even publish Bayesian-based journal articles and books. Further, they will deride the use of NHST as Gelman does. However, when actually teaching health students or actually engaging in applied, consequential research, it’s all frequentist and p-values.

        Many health researchers (e.g. epidemiologists) will dismiss much of the “modern” thinking on methods research as simply fodder for statisticians to discuss among each other. Bayesian methods are seen as primarily for statisticians to impress each other with how fancy they can be, and the fact that these statisticians rarely utilize these approaches in “real-world” research is seen as evidence of the impracticality of these approaches.

        With this in mind, I am surprised that such a prominent Bayesian statistician as Gelman is promoting a book that, as far as I can tell, is entirely frequentist and promotes NHST methods. To be clear, I don’t expect Andrew to ONLY use or advocate for Bayesian methods, but this blog post leaves me confused as to what he believes regarding the extent to which these more forward-thinking, Bayesian-type approaches should be taught to students in the health sciences, and also how much and how often they should be applied to “bread-and-butter” health research.

        • Andrew says:


          You raise important points. One way to consider these is to set aside the Etzioni et al. book and consider our recent book, Regression and Other Stories, which is entirely from a Bayesian perspective—in it, we talk about hypothesis testing only to explain why we don’t like the idea—but most of it still looks like classical statistics, running regressions and looking at estimates and uncertainty intervals.

          What’s going on here?

          A few things:

          1. I think the most important thing in statistical modeling is to understand the model you’re fitting. So our book is full of graphs of data and fitted models, explanations of the meanings of the coefficients, and so forth. Etzioni et al. seem to have a similar perspective.

          2. As a teacher, you have to reach the students where they’re at. And, as a textbook writer, you have to reach the teachers where they’re at. We wanted to write a regression book, not a Bayesian regression book. That is, we wanted to reach some chunk of the many thousands of students every year who learn “regression,” not just the handful who want to learn “Bayesian regression.” So we convey Bayesian ideas but in a way that we think will be acceptable and understandable to teachers who are not coming from that perspective.

          3. As a community, statisticians and applied researchers continue to move in the Bayesian direction, but we’re not there yet. Even when we use Bayesian methods, I think we tend to use priors that are too weak (which, from another perspective, implies that they’re too strong, a point that we’ve discussed many times in this space). Regression and Other Stories is a step forward in that we use stan_glm, which has weakly informative default priors. That’s a big step beyond no priors at all, but ultimately it’s not enough. Anyway, my point here is that it’s hard for our textbooks to go fully Bayesian when as researchers we don’t go fully Bayesian.

          4. Regarding real-world research: we use Bayesian methods all the time! But for a while there have been well-credentialed ignoramuses who’ve not wanted to know this.

          To return to the Etzioni et al. book: I don’t think they’re promoting null hypothesis significance testing. I think they’re promoting standard statistical methods, but with a focus on modeling and estimation, not on testing. I recommend the book because I like a lot of what they say, and I think the book will be acceptable to many teachers of statistics in public health. Also I like that the book is short. Short counts for a lot. A teacher can assign the whole book to a class and still have time to cover a couple chapters of Regression and Other Stories or whatever.

          • Garnett says:

            Regarding #3: I’m starting to see journal and grant reviewers in my field with some familiarity with the Bayesian approach. I notice that the field, for better or worse, is in the process of codifying “acceptable” Bayesian analysis in much the same way that may have been done for classical approaches many years ago. For example, most reviewers in my field insist that there are objectively “correct” and “incorrect” priors.

  2. Alain says:

    I have a copy of RAOS. I liked it a lot, but there is no example of longitudinal analysis (repeated measures data) or analysis of clustered/correlated data. I guess it will be part of the forthcoming book about multi-level modeling?

  3. Dzhaughn says:

    Hey it’s only $28.44 for the Kindle version, a savings of 8 cents from the paperback. So I guess 8 cents is the expected future value on the used market with the buyer’s handwritten marginalia? Minus the future need to purchase a second copy after the fire…

  4. From the chapter abstract, this seems overly simplistic and positive: “Instead of relying on theoretical understanding of the uncertainty of the sampling process and the properties of statistical estimators, these algorithms “bootstrap” or repeatedly resample from the observed data to quantify uncertainty. In a wide range of settings, this approach has been shown to be reliable, and it may be even more intuitive than classical methods.”

    That is, given the theoretical understanding of the bootstrap needed to ensure reliable analyses. For instance, see “What Teachers Should Know About the Bootstrap: Resampling in the Undergraduate Statistics Curriculum.”

    At least, in my career I have seen many statisticians deliver highly unreliable analyses by using the bootstrap with little theoretical understanding of what it needs to do beyond blindly resampling the data.
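    For readers unfamiliar with the mechanics being debated: the basic nonparametric bootstrap for a standard error is only a few lines—resample the data with replacement, recompute the statistic, and look at the spread across replicates. A minimal sketch (mine, not from the book or the article):

    ```python
    import numpy as np

    rng = np.random.default_rng(42)
    data = rng.exponential(scale=2.0, size=200)  # a skewed sample

    def bootstrap_se(x, stat=np.median, n_boot=2000, rng=rng):
        """Bootstrap SE: resample x with replacement, recompute the
        statistic, and take the SD across the bootstrap replicates."""
        reps = [stat(rng.choice(x, size=len(x), replace=True))
                for _ in range(n_boot)]
        return np.std(reps, ddof=1)

    se = bootstrap_se(data)
    print(f"bootstrap SE of the median: {se:.3f}")
    ```

    The caveats at issue are about when this blind resampling is justified—roughly, i.i.d. data and smooth statistics—and when it quietly fails (dependent data, extremes, small samples).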

    • It seems like unreliable sampling to judge the chapter from the chapter abstract. Unless, I suppose, the chapter abstract is a verbatim copy of the chapter.

    • Bob76 says:

      Thanks for the pointer to the article. I learned from it.

      If you are interested in more detail about the textbook’s chapter on the bootstrap, go to Amazon, find the entry for the book, click on look-inside, and search for resampling. I did so and I was able to see about 2/3 of the text of the chapter. Their discussion concludes with the statement “There is a right way—and plenty of wrong ways—to bootstrap in any setting.” Given that the chapter is 15 pages in an introductory text, you cannot expect too much in the way of caveats.

      PS The following text shows that I need to get a life. The article is also 15 pages long. Page 135 of the textbook (an all-text page of the bootstrap chapter) has about 480 words; page 375 of the article you point to (also all text) has about 1010 words. So the article is, roughly speaking, about twice as long as the chapter.

      • Thanks Bob76, I tried to make my comment reflect that the authors might have taken the high road, using the words [just] from the chapter abstract, this _seems_ … but my concern was that they might have taken the low road.

        Raghu pointed out that was not sufficient and I accept that.

        Your bringing out “There is a right way—and plenty of wrong ways—to bootstrap in any setting.” strongly suggests they took the high road. Thanks.

        Cost benefit analysis given the two states:
        They took the high road – authors or diligent readers get to respond – “they/we did take the high road” [you jerk].
        They took the low road – maybe just maybe they will rethink [most with low probability but some declaring me a _methodologic terrorist_]
