Unlike Covid-19, some things don’t seem to spread easily, and the role of simulation in statistical practice (and perhaps theory) may well be one of those.

In a recent comment, Andrew provided a link to an interview about the new book Regression and Other Stories by Aki Vehtari, Andrew Gelman, and Jennifer Hill. The interview covered many aspects of the book, but the comments on the role of fake data simulation caught my interest the most.

In fact, I was surprised by the comments, in that the recommended role of simulation seemed much more substantial than I would have expected from participating on this blog. For at least the last 10 years I have been promoting the use of simulation in teaching and statistical practice, with seemingly little uptake from other statisticians. For instance, my intro to Bayes seminar and some recent material here (downloadable HTML from Google Drive).

My sense was that those who eat, drink and dream in mathematics [edit] see simulation as awkward and tedious. Maybe that’s just me, but statisticians have published comments very similar to this [edit]. Aki, Andrew and Jennifer, however, seem to increasingly disagree.

For instance, at 29:30 in the interview there are about three minutes from Andrew arguing that all of statistical theory is a kind of shortcut to fake data simulation, and that you don’t need to know any statistical theory as long as you are willing to do fake data simulation on everything. However, it is hard work to do fake data simulation well [building a credible fake world and specifying how it is sampled from]. Soon after, Aki commented that it is only with fake data simulation that you have access to the truth in addition to data estimates. That to me is the most important aspect – you know the truth.

Also at 49:25, Jennifer disclosed that she recently changed her teaching to be based largely on fake data simulation and is finding that having students construct the fake world, and understand how the analysis works there, provides a better educational experience.

Now, in a short email exchange, Andrew did let me know that the role of simulation increased as they worked on the book, and Jennifer let me know that there are simulation exercises in the causal inference topics.

I think the vocabulary they and others have developed (fake data, fake world, Bayesian reference set generated by sampling from the prior, etc.) will help more people see why statistical theory is a kind of shortcut to simulation. I especially like this vocabulary and recently switched from fake universe to fake world in my own work.

However, when I initially tried using simulation in webinars and seminars, many did not seem to get it at all.

p.s. When I did this post I wanted to keep it short and mainly call attention to Aki, Andrew and Jennifer’s (to me, increasingly important) views on simulation. The topic is complicated, more so than I believe most people appreciate, and I wanted to avoid a long, complicated post. I anticipate doing many posts over the coming months, and the comments seem to support that.

However, Phil points out that I did not define what I meant by “fake data simulation”, and I admittedly had assumed readers would be familiar with what Aki, Andrew and Jennifer (as well as myself) meant by it. To _me_ it is simply drawing pseudo-random numbers from a probability model. The “fake data” label emphasizes that it is an abstraction, used to represent haphazardly varying observations and unknowns. This does not exclude any Monte Carlo simulation but just emphasizes one way it could profitably be used.

For instance, in the simple bootstrap, real data is used but the re-sampling draws are fake (abstract) and are being used to represent possible future samples. So here the probability model is discrete, with support only on the observations in hand and probabilities implicitly defined by the re-sampling rules. So there is a probability model, and simulation is being done. (However, I would call it a degenerate probability model, given the loss of flexibility in choices.)
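The bootstrap-as-simulation view can be made concrete with a short sketch (Python here, though the same few lines work in R or anywhere else; the data values are invented for illustration):

```python
import random

def bootstrap_means(data, n_boot=1000, seed=1):
    """Resample the observed data with replacement and record each
    resample's mean -- i.e., simulate from the implied discrete
    probability model whose support is the observations in hand."""
    rng = random.Random(seed)
    means = []
    for _ in range(n_boot):
        resample = [rng.choice(data) for _ in data]
        means.append(sum(resample) / len(resample))
    return means

data = [2.1, 3.4, 1.9, 4.2, 2.8, 3.7]  # made-up observations
means = bootstrap_means(data)
# The spread of these resampled means approximates the sampling
# variability of the mean under the discrete "fake world".
```

The resampling rule *is* the probability model; changing the rule (e.g., block resampling for dependent data) changes the fake world being simulated from.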

My sense was that, in today’s vocabulary, many simply did not realize that statistical thinking and modelling always took place in fake worlds (mathematically) and is transported to our reality to make sense of the data we have in hand and are trying to learn from. They thought it was directly and literally about reality. That is, they were not trying to distinguish learning what happened in the data (description) from deciding what to make of it to guide action in the future (inference, prediction or causal reasoning).

That is what we need fake worlds (abstractions) for – to discern this. We can only see what would repeatedly happen in possible fake worlds. What happened in a data set is just a particular dead past, arid of insight into the future possibilities that statistical inference is primarily interested in. Christian Hennig makes related points here.

My own attempts to overcome that misconception have used metaphors: a shadow metaphor of seeing just shadows but needing to discern what cast them, and an analytical chemistry metaphor of being able to spike known amounts of a chemical into test tubes and seeing what noisy measurements repeatedly occur. The discerned distribution of measurements given a known amount is then transported to assess unknown amounts in real samples. However, in many problems in statistical inference, known amounts cannot be spiked, so we need fake worlds built with probability distributions to discern what would repeatedly be observed, given known truths.

Until about 2000, these had to be discerned mathematically, but in the last 10 years this has become more and more convenient to do by simulation. The relative advantage of the mathematical shortcut is shrinking, and hence it is of decreasing value. I expect some pushback here.

Most understand that statistics is hard. That, of course, was while using what Andrew called mathematical shortcuts to fake data simulation. I think it would be a mistake to think that what was hard was just working out those shortcuts. The answers themselves are hard to fully make sense of.

I’ll close by speculating that in 10 years, statistical theory will be mostly about gaining a deep understanding of simulation as a profitable abstraction of counter-factually repeatable phenomena.

Maybe I’m misusing the terminology, but isn’t ‘fake data simulation’ so commonplace that it hardly needs mention? I do fake data simulation all the time. Not literally ‘all the time’, of course, but very commonly. I did it yesterday! And I’ll do it again today. I am currently working on something that requires Monte Carlo simulations (of future electricity prices, and electric load of specific customers; this is for commercial customers who are exposed to energy prices that vary by hour). I fit a model to previous data to get parameter estimates, which I use to generate possible futures. The whole point of the system is to be able to generate fake data!

As for Andrew: I think Andrew has long been a proponent of ‘fake data simulation’ or something close to it. For instance, for the first analysis that he gave me advice about, back in 1993, he recommended ‘posterior predictive checks’ to see how my model was performing. For a posterior predictive check, you use your model to generate a bunch of fake data — or rather, many bunches of fake data, each one a realization of what actual data could have looked like under the model — and then compare the fake data to the actual data (for instance, the maximum value in my data is 1127, but I did 1000 realizations and only three of them had maximum values that high, so I should look at aspects of my model that are preventing it from generating values that high).
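Phil’s check can be sketched in a few lines. This is a simplified stand-in (in Python, with invented data): it plugs in point estimates for a normal model rather than drawing parameters from a full posterior, but the logic of comparing the observed maximum to the maxima of many fake datasets is the same:

```python
import random
import statistics

def ppc_max(observed, n_rep=1000, seed=0):
    """Posterior-predictive-style check on the maximum: fit a simple
    normal model by plug-in estimates (a stand-in for posterior draws),
    generate many fake datasets, and count how often their max reaches
    the observed max."""
    rng = random.Random(seed)
    mu = statistics.mean(observed)
    sigma = statistics.stdev(observed)
    obs_max = max(observed)
    n = len(observed)
    count = 0
    for _ in range(n_rep):
        fake = [rng.gauss(mu, sigma) for _ in range(n)]
        if max(fake) >= obs_max:
            count += 1
    return count / n_rep

observed = [5.0, 6.1, 4.8, 5.5, 12.0]  # made-up data with one large value
p = ppc_max(observed)
# A very small p would flag that the model rarely generates values as
# extreme as the data's max -- Phil's "3 out of 1000" situation.
```

A full posterior predictive check would redraw (mu, sigma) from the posterior for each replication; the plug-in version here understates the model’s predictive uncertainty.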

I am clearly missing something, and I am not saying this in a passive-aggressive way in which I mean that I think _you_ are missing something. I am probably misunderstanding what “fake data simulation” means. Maybe you could explain: if I’m clueless about this, probably others are too.

Phil said,

“Maybe I’m misusing the terminology, but isn’t ‘fake data simulation’ so commonplace that it hardly needs mention? I do fake data simulation all the time. Not literally ‘all the time’, of course, but very commonly. … As for Andrew: I think Andrew has long been a proponent of ‘fake data simulation’ or something close to it….”

You seem to be misunderstanding the reality: Andrew and you are not typical of people doing statistics. Keith is talking about “Most people doing statistics today”.

Martha,

I’m genuinely unsure, but I don’t think so, I think it really is very common. For instance, in climate modeling, models are run to ‘tune’ them to approximately match historical observations. Each run of the model is a ‘fake data simulation’, at least in the plain-English interpretation of those words. Ditto for models to forecast the COVID outbreak. The Monte Carlo approach my colleagues and I are currently using for our electricity work is well within the mainstream of similar work, in that field and other fields, and that’s ‘fake data simulation’ too. Although it’s sort of true that for the past few years of consulting I’ve been working in a bubble — I only interact with about six colleagues and about the same number of clients, and I don’t keep up with the broader literature like I used to — it’s certainly true that in the areas I work in, Monte Carlo sampling and other ‘fake data simulation’ is extremely common. It’s not just me.

I’m guessing that when Keith says “fake data simulation” he means something more or different. Or maybe there’s something else I’m not understanding.

Take a look at the field of system dynamics. It’s built around the notion of creating causal ODE models of real-world organizational, social, environmental, … problems, testing them, and then using them as test beds for trying solutions prior to implementing a change in the real world. “Creating … models” includes fitting them by some combination of picking parameter values from prior research or insight, tuning them manually to produce the desired output behavior over time, using an optimizer of some sort to fit the output, doing a full Bayesian fit (rarer, AFAICT), …. http://vensim.com/vensim-video-library/ has some examples shown via video.

I think Martha is right. In the field I work in, doing operational analysis for hospitals, I am the only person I’ve known in my professional life who makes regular use of simulation. In my undergraduate degree, I think I took twelve modules (of a total of twenty-four) that were about statistics – simulation I think was mentioned in only two.

Of course this is just anecdote but I definitely appreciate all efforts to get more people simulating!

Somewhere I worked in 2010, of the 12 statisticians (3 with PhDs), only 5 of us claimed we could do simulations. In working with them, I discovered that 2 of the 5 actually could not, when I reviewed their work.

It is changing, and I believe it depends on where you did your training.

I think they mean something more along the lines of parameter recovery or model mimicry studies – examining what your statistical model does when you know the true process generating the data.
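A parameter recovery study in its simplest form looks something like this (a Python sketch with made-up parameter values): generate data from a known process, fit the model, and check the estimates against the truth you built in.

```python
import random

def simulate_and_recover(true_slope=2.0, true_intercept=1.0,
                         n=200, noise_sd=0.5, seed=42):
    """Generate data from a known linear process, then fit ordinary
    least squares and see how well the true parameters are recovered."""
    rng = random.Random(seed)
    x = [rng.uniform(0, 10) for _ in range(n)]
    y = [true_intercept + true_slope * xi + rng.gauss(0, noise_sd)
         for xi in x]
    # Closed-form OLS for a single predictor.
    mx = sum(x) / n
    my = sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    slope = sxy / sxx
    intercept = my - slope * mx
    return slope, intercept

slope, intercept = simulate_and_recover()
# Because we built the fake world, we can compare the estimates
# against the known truth (2.0 and 1.0) directly.
```

Repeating this over many seeds gives the sampling distribution of the estimator under the assumed process, which is exactly the “access to the truth” the post describes.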

Phil: Sorry for tardiness in answering this specifically.

My sense was that most statisticians (including past Andrew) thought of simulation as something to (begrudgingly) use to get approximate answers when no one could do it analytically, or as just for those who could not do or understand the math. I have encountered that view many times in my career. Early in my career, my advisor David Andrews actually told me that a professional statistician should not have to stoop to doing simulation, when he found out I was using simulation in power assessments. At Oxford in 2006, I was informed my examiners would be unlikely to pass me unless I put some hard math in my thesis. I did; I thought it was ornamental, but they passed me. Unfortunately, these are things that good surveys are seldom done on – so who really knows the distribution of views here?

However, it would explain ‘posterior predictive checks’, as they usually cannot be done analytically. And at least some of Andrew’s colleagues did not think that using simulation from the joint prior, to get a sense of the implied prior on a parameter of interest, was a good idea. At least, for them.

Now, the views expressed in the linked interview went beyond getting approximate answers and checks. Simulation is now being recognized as an alternative to statistical theory (albeit a very inefficient one): the only process in which we “concretely” know the truth, and a way for students to learn much more about the theory and understand it more purposefully. And the inefficiency is (exponentially?) decreasing.

So, I see that view of simulation as not yet being very commonplace, and as profitable to be aware of.

I understand “fake data simulation” to mean “simulate actual data using fake data”, where “fake data” means data that were generated rather than acquired by an experiment. Well, the whole process of simulating the data could be thought of as an experiment, but that’s a different kind of experiment (is it a meta-experiment?)

To me, it’s a way to learn about the statistical properties of data sets that may not be ideal, ones that may not fit the mathematical properties of distributions that have been worked out over the years.

It’s also a way to come face to face with what can happen when you have small, possibly unusual data sets with few if any replications.

Three thoughts on simulation, statistical training and empirical practice, ranging from tangential to overlapping with those of Keith, with #3 probably the most relevant to the discussion at hand:

1. Looking over my more methods-y research papers, there is a strong positive correlation between how much simulation is included in the manuscript and how hard it is to publish. In one case I had to actually strip the simulation part out, even though it was almost certainly the most convincing part of the paper. It showed that the patterns people were seeing were spuriously generated by the interaction of sampling design and statistical model, even without any “real” effect in the data.

2. The concept of a “sampling distribution” did not really make sense to me until I did my first simulation, which is to say I had a definition and basic idea but no deep understanding of the concept. Simulation also helped deepen my intuition for associated concepts like statistical inference and residual variation (the role of the “error term”) in ways that no thought experiment or mathematical representation ever did for me.

3. Simulation forces you to take BOTH the theoretical model AND the statistical model seriously at the same time. Andrew talks a lot about sampling and measurement and how the importance of the topics are generally under-valued in the field. I think simulation makes clear a second, related, weakness in our training: understanding the relationship between model and data structure. If you are simulating an analysis of a complex dataset, you have to be able to re-build that complexity in a fake world (a DGP that is sampled from). And that re-building has to take into account both the underlying data generating process in the world (the economic, sociological, demographic, physical, chemical process) AND the sampling design from which the observations are generated. (n.b. Maybe this is related to the general problem in scientific communication of confusing the substantive theoretical model of the world with the statistical model that must represent the structure of the data.) This process almost always turns out to be slightly more complicated than I thought it would be when I start building a simulation, and I almost always learn something about both the data and the theoretical model in the process.
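The two layers described above can be sketched in a toy simulation (Python, all numbers invented): a substantive process generates outcomes, and a selective sampling design then decides which units are observed. Only because we built both layers can we see the bias the design induces.

```python
import random

def fake_world(seed=7, n_pop=100_000):
    """Build a fake world in two layers: (1) the substantive data
    generating process, (2) the sampling design that decides which
    units we actually observe."""
    rng = random.Random(seed)
    # Layer 1: the process in the world -- outcome depends on a latent trait.
    trait = [rng.gauss(0, 1) for _ in range(n_pop)]
    outcome = [t + rng.gauss(0, 1) for t in trait]
    # Layer 2: the sampling design -- units with positive traits are
    # far more likely to end up in the observed data.
    observed = [y for t, y in zip(trait, outcome)
                if rng.random() < 0.9 * (t > 0) + 0.1]
    return outcome, observed

pop, sample = fake_world()
pop_mean = sum(pop) / len(pop)        # truth: about 0
obs_mean = sum(sample) / len(sample)  # biased upward by the design
# The gap between the two means is an artifact of the sampling design,
# visible here only because we built both layers of the fake world.
```

A naive analysis of `sample` alone would confidently report a positive mean; the simulation shows that pattern can be generated entirely by the interaction of design and process, echoing point 1 above.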

1. Sad. I hope this will change. But probably most statistics textbooks need to change first.

2. Another good reason for statistics textbooks books to include simulation.

3. Still another good reason for statistics textbooks to include simulation.

Three important points. On the first: Ryan Tibshirani was arranging a panel to deal with journals on that topic – I don’t know if anything came of it.

I can’t speak about people applying statistics in general, but PhD-economists are well aware of the (useful and important) points raised by Keith. Reasons include (i) all economists are taught to think in models, which are fake worlds, (ii) some common econometric methods are explicitly simulation based, such as simulated method of moments or simulated maximum likelihood. Moreover, fake data simulation is done by default in structural modeling.

Great discussion Keith. In fact, just as “all of statistical theory is a kind of shortcut to fake data simulation and you don’t need to know any statistical theory as long as you are willing to do fake data simulation on everything”, it seems that most model evaluation is a kind of proxy for checking out-of-sample predictive fit (or whatever utility function more generally). I wonder about re-building pedagogy around those two core planks…

> re-building pedagogy around those two core planks…

That’s what I want to focus on!

(I think the mistake is trying to do the usual pedagogy/course material with simulation. I believe computer algebra failed this way in statistics teaching.)

In several applied stats classes that I had taken, most students didn’t like simulated examples and asked for real examples instead. I feel that this was mainly due to the priorities students have. To some of them there might not be a point to using simulations if they do not want to understand the ins and outs of their models and statistical choices, but would rather follow a strict and rigid guideline on how to carry out statistical method X and interpret the output.

Another factor I think contributed to students not liking simulations in the classes I had taken is that the professors didn’t really spend time at the beginning of the class discussing the role of simulation in learning and why it matters, or how simulating data actually worked.

Also in a lot of R tutorials I have read, the writers hadn’t actually specified that a block of code was for simulating data and not a part of the method they were showcasing, which can make it very daunting to those not as familiar with statistical programming.

Since I became more interested in understanding why statistical models work the way they do and the implications of statistical choices, I have been wanting to learn how to simulate data. Now I can easily simulate single variables in a data-frame with base R and transform them somewhat to simulate some basic designs, but simulating multiple variables with complex covariance structures, multilevel designs, interaction effects, missing data patterns conditional on covariates, etc…, seems very daunting to me. I purchased the book “Simulation for Data Science with R” which I am currently reading to gain a better understanding. Do you have any good recommendations for building a strong intuition in simulating data in R?
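One stepping stone past single variables is generating correlated variables, which needs only the classic construction below (a Python sketch with an arbitrary correlation of 0.6; the same layered draw-then-condition approach extends to multilevel designs, where you draw group effects first and then units within groups):

```python
import math
import random

def correlated_pair(n=5000, rho=0.6, seed=3):
    """Generate two standard-normal variables with correlation rho,
    via the construction y = rho*x + sqrt(1 - rho^2)*z."""
    rng = random.Random(seed)
    xs, ys = [], []
    for _ in range(n):
        x = rng.gauss(0, 1)
        z = rng.gauss(0, 1)  # independent noise
        y = rho * x + math.sqrt(1 - rho ** 2) * z
        xs.append(x)
        ys.append(y)
    return xs, ys

def sample_corr(xs, ys):
    """Pearson correlation of two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(xs, ys))
    sxx = sum((a - mx) ** 2 for a in xs)
    syy = sum((b - my) ** 2 for b in ys)
    return sxy / math.sqrt(sxx * syy)

xs, ys = correlated_pair()
r = sample_corr(xs, ys)  # should land near the chosen rho of 0.6
```

For more than two variables the same idea generalizes to multiplying independent normals by a Cholesky factor of the desired covariance matrix (in R, `MASS::mvrnorm` does this for you).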

“Another factor I think contributed to students not liking simulations in the classes I had taken is that the professors didn’t really spend time at the beginning of the class discussing the role of simulation in learning and why it matters, or how simulating data actually worked.”

Good point!

phdummy,

“fake data simulation” doesn’t refer to “simulated examples”, or at least doesn’t have to. It’s a common technique in the real world, i.e. real ‘examples’ use fake data simulation. For instance, I have a time series model for future electricity prices. I use this to generate lots of potential futures: the model is used to help make decisions about whether to buy energy in advance (basically on a futures market) in order to avoid the risk of big increases in the future, and, if so, how much to buy. To make sure the model provides decent results, we do fake data simulation: feed in the input data that would have been available several years ago and create many possible scenarios for what could have happened from 2018 to the present. The idea is to make sure that those scenarios include some that ‘look like’ what actually happened, as a check to make sure the model performs OK. I think this is what Keith means by ‘fake data simulation’. But this is a real-world example; there’s nothing ‘simulated’ about the example.
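The backtest described above can be sketched generically (Python, with a random-walk-with-drift model standing in for the real price model; the start price, drift, volatility, and “actual” price are all made up):

```python
import random

def price_scenarios(start_price, drift, vol, n_steps, n_scen, seed=9):
    """Generate many possible future price paths from a simple
    random-walk-with-drift model (a stand-in for a real price model)."""
    rng = random.Random(seed)
    paths = []
    for _ in range(n_scen):
        p = start_price
        path = []
        for _ in range(n_steps):
            p += drift + rng.gauss(0, vol)
            path.append(p)
        paths.append(path)
    return paths

# Backtest-style check: do the scenarios bracket what actually happened?
paths = price_scenarios(start_price=50.0, drift=0.1, vol=2.0,
                        n_steps=24, n_scen=500)
finals = sorted(path[-1] for path in paths)
lo, hi = finals[12], finals[-13]   # roughly a central 95% scenario band
actual_final = 55.0                # hypothetical realized price
covered = lo <= actual_final <= hi
```

If the realized path repeatedly falls outside the scenario band, the model is understating uncertainty (or missing structure) and needs rework before it is used for purchasing decisions.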

Please see p.s. added in post.

What is confusing is why this is not a routine practice in model checking. I understand why this was so back in the era of relatively expensive computing power. But my goodness, it is just such a cheap thing to do these days, save for the most complicated models (and even then I would probably take that as evidence for simplifying the model).

I mean, if you have a model that you’re intending to use for decision making purposes…how can you jump to the decision making without ever having evaluated what the model would have outputted given past data (and how optimal of a decision that would have led to evaluated by known future events)? Is it just assumed that because the model was trained on past data that at all time steps within the training set it would have led to satisfactory inferences (and decisions)?

Maybe my attitude is so extreme on this because I grew up entirely in an era of relatively cheap computing power? But even still I just don’t get the resistance; or more to the point I don’t understand why this is not a natural thing for anyone modeling anything to do…

> don’t get the resistance; or more to the point I don’t understand why this is not a natural thing for anyone modeling anything to do…

As my son told me when he was 17 and I made him look at this stuff, his comment was: yeah, I get it, but shouldn’t there be a formula that does a better job? So some of it might be an overvaluing of analytic symbolic approaches compared to simulation. Something similar also happens in mathematics, where symbolic proofs are overvalued relative to diagrammatic proofs even when the latter are fully rigorous.

Great point. I’ll have to think about how to incorporate that approach into my own work.

> how simulating data actually worked.

And it is so easy to explain: use digits of Pi to get Uniform(0,1) pseudo-random numbers, then use those to generate pseudo-random numbers from any distribution you choose (or even define on your own) via rejection sampling. Inefficient but elegantly transparent. (The efficient methods actually used require a lot of math to understand.)
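The whole idea fits in a few lines (a Python sketch, using the built-in uniform generator as a stand-in for digits of Pi; the triangular target density is an arbitrary example):

```python
import random

def rejection_sample(target_pdf, pdf_max, n, seed=11):
    """Draw from an arbitrary density on [0, 1] using only Uniform(0,1)
    numbers: propose x uniformly, then accept it with probability
    target_pdf(x) / pdf_max. Inefficient but transparent."""
    rng = random.Random(seed)
    draws = []
    while len(draws) < n:
        x = rng.random()           # uniform proposal
        u = rng.random()           # uniform acceptance coin-flip
        if u < target_pdf(x) / pdf_max:
            draws.append(x)
    return draws

# Target: the triangular density f(x) = 2x on [0, 1], whose mean is 2/3.
draws = rejection_sample(lambda x: 2 * x, pdf_max=2.0, n=20_000)
mean = sum(draws) / len(draws)  # should land near 2/3
```

Every step is visible: two uniform numbers in, one accepted draw out, no special functions, which is exactly what makes it good teaching material despite the wasted proposals.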

If you download the HTML from the link I give, you can open it and see and hear that being explained.

Thanks. I will take a look!

Hi phdummy, my advice is to approach the complexity incrementally. You are right that there are many tricks of the trade in building up advanced models. But the universal key to not getting lost in all this is to think *generatively*. Your model is a story about how the data were generated. Simulations are a way to ‘test drive’ the story where you happen to know the “true” parameter values (because you made them up).

So, you can have a complex joint distribution of all your variables, both observed and unobserved, pi[x1,x2,…,xn]. But using the rules of probability, you can factorize this joint. The simplest example: pi[x1,x2] = p[x1|x2]p[x2]. So, on the RHS, we have a generative story – first you have values of x2 with some distribution, and then, conditional on those x2 values, we have x1. Build up the scientific reasoning incrementally, and also play around with things, adding assumptions, relaxing assumptions, etc.
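That factorization translates directly into simulation code (a Python sketch; the two normal distributions are arbitrary stand-ins for whatever your generative story specifies):

```python
import random

def generate_joint(n=10_000, seed=5):
    """Simulate from the joint pi[x1, x2] by following the factorization
    p[x1 | x2] p[x2]: first draw x2, then draw x1 conditional on it."""
    rng = random.Random(seed)
    pairs = []
    for _ in range(n):
        x2 = rng.gauss(0, 1)          # p[x2]: the marginal story
        x1 = rng.gauss(2 * x2, 0.5)   # p[x1 | x2]: mean depends on x2
        pairs.append((x1, x2))
    return pairs

pairs = generate_joint()
# Marginally, x1 should center near 0, even though conditional on any
# particular x2 it centers at 2 * x2 -- the factorization in action.
mean_x1 = sum(p[0] for p in pairs) / len(pairs)
```

Each line of the loop is one factor of the joint, so extending the model (another variable, another layer) is just another conditional draw.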

Nicely put.

Tom Fiddaman has a model library at https://metasd.com/model-library/, if you want to see a collection of various system dynamics models. And while many system dynamics models are implemented in Vensim, STELLA, or PowerSim, you can use GNU MCSim, Stan, or pretty much any ODE solver. The commercial and some of the free alternatives do provide a GUI, so you draw a diagram of the “system,” and it creates some of the equations.

Someone recently pointed out to me that “fake data simulation” is not a good term to use, but rather just use “simulated data”. Indeed, in the early days when I was using Andrew’s terminology of fake-data simulation (influenced by the Gelman and Hill book, in 2007 or so?), someone did think that I was actively faking data, just because the phrase “fake data” was there. I now use the phrase simulated data.

I made my first serious contact with statistics in 2000 or so, in a course taught by Mike Broe, who was a phonetician at the time at Ohio State, and he always used simulation from the get go (with Mathematica or some such software). It made everything very transparent, and all my stats courses today revolve around simulation. Influenced by Andrew+Jennifer and Aki’s work, our research also heavily depends on simulation-based validation of models. Their work has really been transformative for my lab. Paul Buerkner also deserves a lot of credit for making the entry into the workflow much quicker.

Here are my lecture notes on introductory stats (a book under contract with CRC Press), if anyone is interested: https://vasishth.github.io/Freq_CogSci/

Comments/criticism/complaints can be made by opening an issue here: https://github.com/vasishth/Freq_CogSci/issues

Readers on this blog will be appalled that the book is about frequentist statistics, but frequentist stats also needs to be taught.

We also teach this material (and of Bayesian stats using Stan+brms) in a one-week summer school taught annually at Potsdam: https://vasishth.github.io/smlp/.

I’ve used “synthetic data” when trying to sound more honorable and professional and “fake data” when trying to make the idea more memorable. I have no idea if that worked.

Keith, the google drive link does not lead to human readable content. What were you linking to? Could you repost that?

The file needs to be downloaded to be viewed – it is too big for Google Docs to display for you (there are animations and narration in it).

(I’ll replace it with something smaller when I get a chance.)

What are some good references that people can use for teaching and/or self-learning about simulation studies?

Most texts I’ve found tend to focus on fairly simple/toy examples that all rely on fairly strict adherence to multivariate normality assumptions which may not be the most applicable across fields of study. Other times there is a presentation of mathematical models but no examples of the application in a programming language.

I definitely would like to learn/know more about simulation studies, but outside of academia it seems more difficult to access high-quality training on this topic.

I’ve written 2 books on simulation modeling (sorry to be self-recommending, to use Tyler Cowen’s favorite term). I wholeheartedly support the use of “fake data simulation” in statistics, but I would point out that this is just one aspect of the value of simulation. For example, there are many situations where data is plentiful and parametric distributions fit that data fairly well (e.g. think of financial data, such as stock price changes). When modeling something (such as the behavior of a portfolio over time) the actual data and the fitted distribution will perform somewhat differently, particularly in the tails of the distribution. It is not obvious which is “better” or which will yield more accurate predictions. In fact, that is what attracts me to many examples where simulation can be used: an imperfect choice must be made as to whether to be bound by the historical data or make the leap of faith to base decisions on a model (which necessarily is “wrong” in the sense of deviating from observed facts).

Hi Dale,

It’d be helpful/useful to provide full references if you’re going to self-recommend (which is completely fine).

Dale – please ;-)

https://www.routledge.com/Practical-Spreadsheet-Modeling-Using-Risk/Lehman-Groenendaal/p/book/9780367173869

spoiler alert: it is Excel-based

Nice double entendre with the spoiler alert ;). Do you have any plans to publish a newer edition or different version that uses something other than MS Excel?

Dale:

When I have tried to do tutorials in Excel, I found the non-programming implementation just got very awkward (each draw of 1000 needs 1000 rows, and 1000 is really small for simulation these days), and with programming (Basic or something) everything gets hidden. In R, all the steps can be seen and played with by simply modifying settings and code.

How do you deal with the simulation steps?

I don’t use native Excel – this latest book uses @Risk and my earlier book used ModelRisk. While you can force Excel to do many iterations, it is awkward and limited. The Monte Carlo add-ins are really quite powerful. I used to use Crystal Ball (another Excel add-in) but stopped after Oracle acquired them. There is one more, Risk Solver Platform, which really excels (unintended pun) at optimization, but is currently more limited in some other dimensions.

Given a basic grasp – I would suggest Art Owen’s Monte Carlo theory, methods and examples https://statweb.stanford.edu/~owen/mc/

He may also have some tutorials.

Simulation is certainly a useful technique, but I’m not sure that it is universally applicable. I spent a year (not my day job) improving the model described on https://www.ratingscentral.com/HowItWorks.php by adding a Poisson jump process to the model. I spent some time generating fake data, but this wasn’t useful. The approach that was useful was to process real data using several different models.

However, by generating fake data like enough to the real data, shouldn’t one be better able to discern the models’ different repeated properties?

Part of the fake data simulation challenge is specifying an appropriate probability model. With that specification, the model is taken as true beyond doubt in the simulations.

I haven’t seen anyone mention multiple comparisons, but that seems like a big reason to do simulation / use fake data.

If I’m not sure what model to fit, I can keep running different models against my real, possibly expensive-to-collect data. Then a little voice starts whispering, “Don’t use the data twice.”

Or I can use fake-data simulation to create a new data set from the same model for each new experiment. If I’ve done it well, each data set should be “identical” in its statistical characteristics, letting me largely ignore multiple comparisons issues.

When I get a model that performs well against the fake data, I can apply it to the real data to draw inferences.

Right?

If you have a model that produces good simulated data, then it’s a good model already. You don’t need to fit more models to its output and then see if they produce good fits.

The whole “don’t use the data twice” concept is really a shortcut to say “don’t condition incorrectly”.

p(Something | Data & Data) = p(Something | Data)

If you’re doing your conditioning correctly, the two are equal, putting Data in twice doesn’t change anything.

But if you do p(Something | Data1 & Data2) and pretend the Data2 is a new separate measurement when in fact it’s just a second copy of the first measurement… then you wind up with the wrong answer.

This is where the “don’t use the data twice” comes from: from pretending you have more information than you really do.
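A toy illustration of that difference (a conjugate beta-binomial example of my own, not from the thread): conditioning on the data once gives one posterior, while pretending a duplicate copy of the data is a new independent measurement counts every observation twice and gives a spuriously tighter one.

```python
# Beta(a, b) prior; observe k successes in n binomial trials.
a, b = 1, 1
k, n = 7, 10

# Conditioning on the data once: posterior is Beta(a + k, b + n - k).
post_once = (a + k, b + n - k)

# Pretending a duplicate copy of the data is new, independent evidence
# counts everything twice: Beta(a + 2k, b + 2(n - k)).
post_twice = (a + 2 * k, b + 2 * (n - k))

def beta_var(a, b):
    """Variance of a Beta(a, b) distribution."""
    return a * b / ((a + b) ** 2 * (a + b + 1))

# The double-counted "posterior" is spuriously overconfident:
print(beta_var(*post_once) > beta_var(*post_twice))  # True
```

The means barely move, but the double-counted version claims far more precision than the data actually support, which is exactly the “pretending you have more information than you really do” failure.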

Fitting multiple models to the same data and considering all of them is not using the data multiple times… It’s using the data once to fit multiple models.

Since the computing requirements of taking, say, a 350-dimensional modeling problem, adding in 13 different variants of that model as a big mixture, and thereby converting your problem to a single 4500+-dimensional model are prohibitive, it can be much better to fit each of the 13 models separately, in reasonable time, and then compare across them in some other way. But if you did fit it as the single mixture model, it would obviously be conditioning the one meta-model on the data in a correct way, using the data once.

The approximate version isn’t “using the data 13 times”; it’s using the data once in an approximate computation.

There’s nothing wrong with testing your models against simulated data, except that if the simulator isn’t a good model of reality, you’ll be choosing your model on the basis of its fitting your simulator, not on the basis of its fitting your data.
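A quick sketch of that failure mode (an invented example: the “simulator” is normal while the “real” process is heavy-tailed): a model tuned to the simulator will treat as essentially impossible tail events that the real process produces routinely.

```python
import numpy as np

rng = np.random.default_rng(1)

# "Simulator": we assume the world is Normal(0, 1).
sim = rng.normal(size=100_000)

# "Reality": the real process has much heavier tails (Student-t, df = 2).
real = rng.standard_t(df=2, size=100_000)

# A model selected to fit the simulator treats |x| > 4 as negligible...
print(float(np.mean(np.abs(sim) > 4)))   # tiny, on the order of 1e-4
# ...but the real process exceeds that threshold orders of magnitude more often.
print(float(np.mean(np.abs(real) > 4)))
```

Any model-selection procedure run against the normal simulator would happily discard exactly the heavy-tailed candidates that the real data need.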

Here is a great paper on multiple comparisons using simulation:

Von der Malsburg, T., & Angele, B. (2017). False positives and other statistical errors in standard analyses of eye movements in reading. Journal of Memory and Language, 94, 119-133.

This paper should have become a classic by now in eyetracking (reading) research. Maybe there’s still time. What we do in eyetracking is just keep looking looking looking to see which of many dependent measures shows the effect we want. Eventually we find what we need. This paper blows that whole thing up.
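The inflation is easy to demonstrate by, fittingly, simulation. In this toy version of mine (independent measures for simplicity, whereas real eyetracking measures are correlated, and normal-approximation z tests rather than the exact tests used in the paper), there is no true effect at all, yet testing ten measures at the nominal 5% level produces a “significant” result in a large fraction of experiments.

```python
import numpy as np

rng = np.random.default_rng(2)

# Pure-null simulation: two groups with NO true difference, but we test
# many measures and declare victory if ANY of them is "significant".
n_sims, n_measures, n = 2000, 10, 30
false_alarms = 0
for _ in range(n_sims):
    a = rng.normal(size=(n_measures, n))
    b = rng.normal(size=(n_measures, n))
    # Two-sample z statistic per measure (normal approximation for brevity).
    z = (a.mean(1) - b.mean(1)) / np.sqrt(
        a.var(1, ddof=1) / n + b.var(1, ddof=1) / n
    )
    if np.any(np.abs(z) > 1.96):
        false_alarms += 1

# A nominal 5% per-test rate inflates to roughly 40% family-wise
# when you give yourself 10 tries.
print(false_alarms / n_sims)
```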

I think “fake” is absolutely the wrong word here, especially since 2016. And when “fake” and “simulation” are used together, things get very fuzzy. What would “non-fake data simulation” be?

Simulation and theory really go hand in hand. Simulation clarifies the understanding and points to things we don’t understand so well (behavior in the tails); theory helps understand the simulation results and sharpen the ability of simulation to move into those sketchier areas. It’s worth noting in these sorts of discussions that Student/Gosset developed his Student-t distribution through the use of laborious pre-computer simulations.
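One can re-create a small slice of Gosset’s experiment in a few lines today (a sketch, with the sampling he did by hand replaced by a random number generator): for small samples, the statistic √n·x̄/s has visibly heavier tails than the normal distribution would suggest.

```python
import numpy as np

rng = np.random.default_rng(3)

# For each replicate, draw a small normal sample and compute
# t = sqrt(n) * xbar / s, the statistic Gosset studied.
n, reps = 4, 50_000
samples = rng.normal(size=(reps, n))
t = np.sqrt(n) * samples.mean(1) / samples.std(1, ddof=1)

# Tail frequency beyond 1.96: a normal reference would give about 5%,
# but with n = 4 (3 degrees of freedom) the simulated rate is roughly
# three times that, matching the heavy tails of the t distribution.
print(float(np.mean(np.abs(t) > 1.96)))
```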

I agree that simulation tends to be short-changed as a teaching instrument. Maybe it will get more accepted as we see increased displays of simulation in the wider world, such as the simulated hurricane storm tracks that you now see in the news, or the simulated election outcomes on the Economist web site.

I recently added a whole new part to the Stan User’s Guide on Posterior Inference and Model Checking. It goes over both the theory and Stan code for simulation-based calibration, prior and posterior predictive checks, cross-validation, and even the bootstrap. These are all based on simulation.

Thanks.

Two thoughts:

1. Stephen Senn quips that “simulation is mathematics by other means”. This is sometimes true, but I think misleading in general, depending on what the simulation is for. In probability it’s reasonable to think of simulation as fundamental, and things like densities as really just a means of reasoning concisely about simulations.

2. Part of the need for this terminology is the ambiguity in the term “model”. It can refer to a single probability distribution, or a parameterized family of them. The sample space can represent “one world” (e.g. ω∈Ω indexes units) or “many worlds” (e.g. ω∈Ω is mutually exclusive possible states of a system). I wish statisticians had let “model” refer to a single probability distribution by default.

Leon:

Mathematics is simulation by other means. It is simulation that is fundamental, not mathematical analysis.

Andrew,

On first reading, I interpreted your comment as saying something different from what I think was your intent. So if I may rephrase to (I hope) clarify what I think is your intent:

“Simulation is fundamental. Mathematical analysis is just one means of simulation.”

Martha:

Yes, well put.

Missed these comments.

I was not yet going that far, given that (statistical) simulation only gives specific results. For instance, for parameters between 0 and 1, one has to make do with just a set of points, each to only so many digits.

However, if one thinks of simulation more generally as performing experiments on abstractions (diagrams or symbols) to learn about them, which was Peirce’s definition of mathematics, then I would agree.

I’ve been thinking about this. I think the distinction is Algebra vs Analysis.

Algebra is about the use of rules and symbols and logic and language. If you approach a problem by writing out a computer program to calculate the answer… you’re doing Algebraic thinking.

Analysis is about approximation of one thing by another, and finding bounds on the errors between things. If you approach a problem by finding out that it can’t be smaller than x or bigger than y, you’re doing Analytic thinking.

In my (probably very controversial) opinion, Algebra is more fundamental. For example, if in the early days of the development of mathematical thought, aliens had come and given us access to computers 100 trillion trillion times faster, with vastly more memory, than the ones we have now, we’d not have developed analysis; we’d solve all math problems by the trivial methods that numerical analysts tell you not to use. The key thing would be the code, the symbols, and whether the relations between them were correct.

On the other hand, Analysis is certainly plenty useful. My favorite form of it though is nonstandard analysis, in which Analysis *becomes* Algebra.

So, when Andrew says “simulation is fundamental”, I think in the end he’s agreeing with me that Algebra is fundamental, the relationship between the things. Random simulation or deterministic simulation, or whatever, they all boil down in essence to computer code, which boils down in essence to lambda calculus or Turing machines or whatever your favorite primitive is. It’s the specification of what you think is going on, or at least of what you think you should do to predict.

The “mathematics = Analysis” equation is the shortcut answer. It exists because we don’t have that 100 trillion trillion times faster computer, and we can’t process terabytes of symbols through theorem-proving programs and so on.