
Is there any scientific evidence that humans don’t like uncertainty?

Avram Altaras asks:

Is there any scientific evidence that humans don’t like uncertainty? I think I saw that in one of Gigerenzer’s articles, and the guest lecturer talked about it last week. It’s def conventional wisdom but I’m having difficulty accepting it.

I replied that I’m not sure. I guess a statement such as “humans don’t like uncertainty” would have to be stated more specifically before it could be evaluated. So I assume this has been done, but I don’t know the literature.

Altaras continued:

There is the test that asks “do you want $100 with certainty or $50 if a coin comes up heads and $150 otherwise”, which only proves that humans don’t like to be exposed to uncertainty without compensation. Others invoke cavemen, dinosaurs, and evolution to make the point (almost any point.)

I tossed the question off to Josh “Don’t call him ‘hot hand'” Miller, who replied:

Unless I misunderstand what is meant by “like” and “uncertainty,” I have a boring opinion: it depends.

People sometimes dislike certainty, e.g. they hate spoilers.

People often seek out uncertainty, e.g. suspense and surprise.

Some people find uncertainty to be convenient, e.g. willful ignorance and plausible deniability.

We could probably go on…

On the other hand, when I am making a decision, I prefer to know the consequences of my actions, rather than for them to remain uncertain. At the very least it helps for planning.

Can any of you help out here?


  1. I thought that ‘laundering uncertainty’ was a truly ingenious conception. Andrew, you coined it, right?

    • Andrew says:


      Here’s what I wrote in 2014:

      One of our ongoing themes when discussing scientific ethics is the central role of statistics in recognizing and communicating uncertainty. Unfortunately, statistics—and the scientific process more generally—often seems to be used more as a way of laundering uncertainty, processing data until researchers and consumers of research can feel safe acting as if various scientific hypotheses are unquestionably true.

      I don’t know if others used this term in that way before that.

  2. Alex Godofsky says:

    If instead of “uncertainty” you say “risk” (and I don’t think the original question is being precise enough to distinguish the two), the intuition is straightforward:

    1) If you have individuals who just always like risk, then it is easy to take all of their money, because risk can be manufactured cheaply in limitless quantities. (See also: lotteries, casinos.)
    2) If you have individuals who like risk *up to a point*, and after that eventually start to dislike additional, marginal risk, then you will see people generally making risk-averse decisions.

  3. Carlos Ungil says:

    Is there any scientific evidence that humans don’t like gambling?

    Some don’t like it, and don’t gamble, some like it even less, and forbid gambling. But those who like gambling are also humans!

  4. Both ‘gambling’ and ‘uncertainty’ have multiple connotations, so examining the context is key. I’m more interested in the ways in which ‘uncertainty’ is used to inform different audiences about biomedical enterprises. I find it fascinating, especially how the term ‘uncertainty’ is deployed by different stakeholders.

  5. Ethan Bolker says:

    In response to Miller’s boring answer:

    I often tell my students that every interesting question has the same answer: “it depends”, meaning (in many contexts) that if you can easily calculate or look up the answer the question isn’t interesting.

    • jim says:

      “if you can easily calculate or look up the answer the question isn’t interesting.”

      It also means the answer isn’t worth anything. That’s why doctors fight so hard against automated diagnoses.

  6. Anon says:

    this paper might be relevant: Tversky, A., & Shafir, E. (1992). The disjunction effect in choice under uncertainty. Psychological science, 3(5), 305-310.

  7. Paul Hayes says:

    Does the human response to the train illusion (#9 here) count as a “dislike” of uncertainty?

  8. James says:

    And in the example of $50/$150 versus $100, a diminishing marginal utility of cash can fully explain the phenomenon of people opting for certainty. I’m certainly not saying that everyone is doing a full expected utility analysis for every decision they make, but let’s not pretend that expected utility theory doesn’t have an explanation.

    • James says:

      minor edit: the second half of the last sentence was more hostile than intended, please disregard the tone

    • Bi says:

      Right. You beat me to the punch. It’s a point that I have made in one way or another in my honors college course on decision theory (taught to nonscience Freshmen and Sophomores).

      • Bill Jefferys says:

        (Somehow my name got garbled above).

        Generally, I prefer to discuss issues like this when the amounts of money are more substantial, because the nonlinearity of the utility/loss functions is much more pronounced. In my classes, where the students are often looking at student debts in the $10K-$100K range, I would rather pose the choice as: Would you rather have $100,000 for certain, or a 50/50 chance at either $50,000 or $150,000?

        Put this way, the potential loss of utility of $50,000 relative to $100,000 is considerably greater than the potential gain of $50,000. People would much rather not lose the $50,000 than have an even chance at an extra $50,000. So the problem is a lot easier to discuss.

        It also leads to a class discussion of insurance. The reason insurance works and that people are willing to make an “unfair” (to them) bet with the insurance company that they will lose the house but be compensated by the insurance company is because the insurance company, with huge assets, is essentially working in the linear part of its loss function, while the individual is working in a highly nonlinear part of his or her loss function. So the bet advantages both parties (from a loss/utility point of view).
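        Bill’s $50,000/$150,000 example can be made concrete with a toy calculation. A minimal sketch, assuming a square-root utility function (my arbitrary concave choice for illustration, not anything specified in the thread): the certainty equivalent of the gamble falls noticeably below its $100,000 expected value.

```python
import math

def certainty_equivalent(outcomes, probs, u=math.sqrt, u_inv=lambda v: v ** 2):
    """The sure amount whose utility equals the gamble's expected utility."""
    expected_utility = sum(p * u(x) for p, x in zip(probs, outcomes))
    return u_inv(expected_utility)

# 50/50 gamble between $50,000 and $150,000; expected value is $100,000.
ev = 0.5 * 50_000 + 0.5 * 150_000
ce = certainty_equivalent([50_000, 150_000], [0.5, 0.5])
print(round(ce))  # 93301: this sqrt-utility agent takes ~$93,301 for sure over the gamble
```

        With a different concave utility the exact figure changes, but the qualitative point is the same: at stakes large relative to wealth, curvature alone produces a substantial preference for the sure $100,000.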

        • Martha (Smith) says:

          Bill said, “The reason insurance works and that people are willing to make an “unfair” (to them) bet with the insurance company that they will lose the house but be compensated by the insurance company is because the insurance company, with huge assets, is essentially working in the linear part of its loss function, while the individual is working in a highly nonlinear part of his or her loss function. So the bet advantages both parties (from a loss/utility point of view).”


    • Andrew says:


      A diminishing marginal utility of cash does not explain risk aversion; see section 5 of this article, and we’ve discussed this topic many times over the years, going back at least to 2005.

      • Bill Jefferys says:


        I am not very convinced by the class example you gave in that article. The reason is that it is really not very clear that most people’s utility functions are significantly nonlinear for small amounts. The extrapolation, even done in stages, from small amounts like $1 to large amounts like $1,000,000 probably can’t even be done consistently as you do it in class, it seems to me.

        Maybe I’ve misunderstood the point of your experiment, but when I did stuff like this in class I always used only large amounts for elicitation of utility functions, amounts comparable to significant student debts, and then relied on interpolation for small amounts.

        Any thoughts?

        • Andrew says:


          Of course people’s utility functions are not nonlinear in any meaningful way for these small numbers. That’s the point! Students display risk aversion—they don’t like these gambles—but this has nothing to do with a utility function for money, and it has everything to do with people preferring the sure thing, not liking the risk, etc. The entire point of this exercise is to explain that risk aversion (in some settings) is a real thing that has nothing to do with the utility function for money. It’s a demonstration that you can’t reasonably explain preferences regarding uncertainty using a utility function.
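          A quick numerical sketch of this point (the square-root utility over total wealth and the $10,000 wealth level are my illustrative assumptions, not Andrew’s): any smooth concave utility of wealth is nearly linear over dollar-scale gambles, so it predicts near-indifference, not the clear aversion students actually display.

```python
import math

WEALTH = 10_000  # assumed current wealth; any plausible figure tells the same story

def risk_premium(lo, hi, w=WEALTH, u=math.sqrt):
    """Expected value of a 50/50 gamble minus its certainty equivalent,
    under expected-utility-of-wealth maximization with u = sqrt."""
    eu = 0.5 * u(w + lo) + 0.5 * u(w + hi)
    ce = eu ** 2 - w  # invert u: the sure gain with the same expected utility
    ev = 0.5 * lo + 0.5 * hi
    return ev - ce

# A 50/50 gamble between winning $0.50 and winning $1.50:
print(risk_premium(0.50, 1.50))  # a tiny fraction of a cent: effective indifference
```

          So if students reject such small gambles, the curvature of a utility-of-wealth function cannot be doing the work; something else (dislike of risk per se) must be.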

        • somebody says:

          > The extrapolation, even done in stages, from small amounts like $1 to large amounts like $1,000,000 probably can’t even be done consistently as you do it in class, it seems to me.

          This is the point of the experiment. Expected utility can be used as a decision-making framework that guarantees a set of desirable properties. It cannot be a universal theory for explaining uncertainty aversion /because/, as you point out, it cannot be done consistently.

          You may say it’s a strawman, but it’s pretty standard in microeconomics theory to claim that risk aversion IS utility curvature, and that rational decision making IS utility maximization.

  9. Jeff Helzner says:

    Well, there’s a pretty large literature on uncertainty aversion, e.g., as a way to explain typical responses to Ellsberg style problems. Some of it is interesting, but I think it misses Ellsberg’s main point, which seemed more concerned with challenging the normative status of expected utility theory than with identifying psychological effects. Anyway, I digress. Just google up “Uncertainty Aversion” and you’ll find a billion or so academic papers on the topic.

  10. gec says:

    Taking a different perspective, Dan Berlyne, who unfortunately died quite young, followed work by people like Leonard Meyer to study the information-theoretic correlates of aesthetics. In other words, how is “liking”/preference related to predictability in context? It is probably not surprising that, in general, there is a “sweet spot” where too much uncertainty/unpredictability is just noise and too much certainty/predictability is boring.

    So if the question is about “liking”, people tend to “like” a moderate degree of uncertainty. The rub is that different people have different internal models about the world and therefore experience different degrees of certainty in different contexts.

  11. Mark Samuel Tuttle says:

    In graduate school I ate a lot of Chinese food. It was inexpensive, hot, tasty, satisfying and the service was very friendly after they got to know me. At that point in my life I wanted “certainty”, at least for dinner. (The rest of my life was uncertain, to say the least.)

    At other times, I craved novelty – new challenges, new experiences, thus, I wanted uncertainty, or at least the certainty of uncertainty.

    One can think of a host of evolutionary reasons for this.

    One of my favorite examples is why men do not want to ask for directions, etc.

    • Mark Samuel Tuttle says:

      Yet more on “uncertainty” …
      On Mon, Feb 3, 2020, 8:39 AM Andrea Bajcsy wrote:
      Dear all,

      Today (February 3), we are welcoming Prof. Jeannette Bohg for the DREAM/CPAR Seminar.

      Please see below for details.
      Who: Prof. Jeannette Bohg
      Where: 250 Sutardja Dai Hall
      When: 4-5pm, Monday, February 3, 2020

      Title: Acceptance over Ignorance – How to embrace uncertainty in robotic manipulation.

      My research is driven by the puzzle of why humans can effortlessly manipulate any kind of object while it is so hard to reproduce this skill on a robot. Humans can easily cope with uncertainty in perceiving the environment and in the effect of manipulation actions. One hypothesis is that humans are exceptionally accurate in perceiving and predicting how their environment will evolve. Therefore, improving the accuracy of perception and prediction is one way forward. In this talk, I would like to advocate for a different view on this problem: What if we will never reach perfect accuracy? If we accept that premise, then an important focus towards more robust robotic manipulation is to develop methods that can cope with a base level of uncertainty and unexpected events.

      In this talk, I will present three approaches that embrace uncertainty in robotic manipulation. First, I present an approach where one robot scaffolds the learning of another robot by optimally placing physical fixtures in the environment. When optimally placed, these fixtures funnel uncertainty and thereby dramatically increase learning speed of the manipulation task. Second, I present an approach that goes beyond a single manipulation task by performing task and motion planning. We propose to combine a logic planner with a trajectory optimiser, where the output is a sequence of Cartesian frames that are defined relative to an object. This object-centric approach has the advantage that the plan remains valid even if the environment changes in an unforeseen way. Third, I present an approach for deformable object manipulation, which is a challenging task due to a high-dimensional state space and complex dynamics. Despite large degrees of uncertainty, the system is robust thanks to a continuously re-planning model-predictive control approach.

      Bio: Jeannette Bohg is an Assistant Professor of Computer Science at Stanford University. She was a group leader at the Autonomous Motion Department (AMD) of the MPI for Intelligent Systems until September 2017. Before joining AMD in January 2012, Jeannette Bohg was a PhD student at the Division of Robotics, Perception and Learning (RPL) at KTH in Stockholm. In her thesis, she proposed novel methods towards multi-modal scene understanding for robotic grasping. She also studied at Chalmers in Gothenburg and at the Technical University in Dresden where she received her Master in Art and Technology and her Diploma in Computer Science, respectively. Her research focuses on perception and learning for autonomous robotic manipulation and grasping. She is specifically interested in developing methods that are goal-directed, real-time and multi-modal such that they can provide meaningful feedback for execution and learning. Jeannette Bohg has received several awards, most notably the 2019 IEEE International Conference on Robotics and Automation (ICRA) Best Paper Award, the 2019 IEEE Robotics and Automation Society Early Career Award and the 2017 IEEE Robotics and Automation Letters (RA-L) Best Paper Award.


  12. Similar to Josh’s answer, Chuck Manski (economist) summarizes some of the arguments for why people dislike or can’t tolerate uncertainty in a recent article called ‘The Lure of Incredible Certitude’. He cites Bar Anan, Wilson, and Gilbert (2009) saying that “Uncertainty has both an informational component (a deficit in knowledge) and a subjective component (a feeling of not knowing)”, noting that the subjective component doesn’t have an interpretation in expected utility theory. Bar Anan et al don’t fully buy the view that feelings about uncertainty are generally aversive so they propose in contrast, “an uncertainty intensification hypothesis, whereby uncertainty makes unpleasant events more unpleasant (as prevailing theories suggest) but also makes pleasant events more pleasant (contrary to what prevailing theories suggest).” (Section 3)

    Ultimately it seems like the empirical foundation for predicting feelings about uncertainty (not unlike the empirical foundation for a lot of affect research in psych) is a bit iffy. Sometimes people like uncertainty and sometimes they don’t, when you use the term ‘uncertainty’ to encompass situations as different as gambling and being told movie endings.

  13. Psyoskeptic says:

    There is plenty of animal research indicating that information that predicts reward is, in itself, rewarding.

    For humans imagine an elevator with no floor indicators versus one with floor indicators. Which one would you rather travel in?

  14. Peter says:

    There’s an extensive literature on self-reported distaste for uncertainty, mostly grown out of the literature on anxiety disorders. For example, in this paper — — the authors asked ~1000 people a set of 12 questions intended to capture how much they were unwilling to tolerate uncertainty (e.g., “It frustrates me not having all the information I need”, “I must get away from all uncertain situations”, etc.). Using a Likert scale ranging from 1 (“Not at all characteristic of me”) to 5 (“Entirely characteristic of me”), the average item score was 2.15 (SD=0.79), near the center of the Likert scale.

    But, the authors found that scores on their measure were positively correlated with anxiety and depression (this is a reliable finding which I’ve seen in my own unpublished data as well). In other words, I think that this question — “Is there any scientific evidence that humans don’t like uncertainty?” — is almost certainly misspecified. Some humans do and some don’t, and the ones that can’t handle uncertainty tend to be somewhat less happy.

    • Curious says:


      While people with relatively higher levels of anxiety tend to be less happy, they also tend to be better at critical thinking and better editors and better at organization. The issue of importance is what methods are used to resolve the cognitive dissonance relative to the need for certainty.

      1. Pretend it doesn’t exist — Denial
      2. Quantify it — Understand it
      3. Focus on the causal process — Understand it more fully

      Each method resolves the dissonance and thus the anxiety, at least temporarily. However, some are more useful than others.

      • Peter says:

        Definitely. I didn’t intend to disparage anxious people — I’m (often) one of them — just wanted to add some additional layers of complexity to the question from the title of this post. I think the answer to this question of whether humans “don’t like uncertainty” is going to be that “it varies”, and not only that, it covaries with factors that are really consequential like sadness and anxiety. The variability itself is important.

        • Curious says:


          Agreed. I was simply trying to add some additional context as it’s something I once spent an inordinate amount of time thinking and writing about given the job I had at one time.

  15. Kaiser says:

    I’m guessing the economics literature is where this is at. It’s in econ classes that I remember making the assumption that humans are generally risk-averse, just as (classical) econ classes assume homo economicus. I recall “certainty equivalent” as an empirical way of measuring risk aversion, although I can’t seem to find a paper that contains experimental results on this. The wiki entries on risk aversion and certainty equivalent contain equations but no mention of empirical support.

  16. Only yesterday I was looking for references on the _Information Avoidance_ phenomenon, where people supposedly assign a *positive* valuation to uncertainty. I am not across this field, but a review article by Russell Golman, David Hagmann, and George Loewenstein discusses some possible examples of people actively seeking to avoid information (and logically, therefore, seeking uncertainty).

    “Investors avoid looking at their financial portfolios when the stock market is down, an “ostrich effect” […] Individuals at risk for health conditions often eschew medical tests (e.g., for serious genetic conditions or STDs) even when the information is costless and should, logically, help them make better decisions […]. Managers often avoid hearing arguments that conflict with their preliminary decisions […], even when such arguments could help them avoid implementing measures that are ill-founded.
    These examples only scratch the surface of a wide variety of situations in which people avoid information.”

    If these effects are indeed as well-supported as Ellsberg Paradox-style results, then it would seem “it depends” is indeed a good answer.

    • jim says:

      This brings up an interesting point:

      Seems like in this case people are avoiding the momentary market information from fear their emotions compel them to act erroneously, because they have better information that suggests the market will rise in the long term.

      So they’re not avoiding information, they’re weighing the quality of each source of information – today’s market price vs. the likelihood of strong long term returns – and acting according to the information that they deem the highest quality.

  17. Kevin H says:

    In some circumstances uncertainty may be highly desirable, such as not knowing the end of a story or the winner of a sporting competition.

    In others, much less so.

    I’ve wondered why so many doctors opt to specialize in narrow fields rather than become general practitioners. Mastery of a very narrow field is frequently preferred because general practitioners and ER physicians have to deal with much more uncertainty given their unfiltered patient populations.

    Society values hyper-specialization more, but arguably general practitioners and ER physicians should be valued more because they have to manage much greater uncertainty! I’m not aware of literature on this!

    • Martha (Smith) says:

      Although there might be an additional factor applying to general practitioners but not ER physicians: namely, the preference to have an ongoing relationship with patients.

  18. Tim Lynam says:

    The need for closure concept and associated measurement scale from social psychology may be useful for understanding human orientations to uncertainty. The Wikipedia article on it provides a useful summary and cites Kruglanski’s work (with others) on it.

    Quoting from there “The need for closure in social psychology is thought to be a fairly stable dispositional characteristic that can, nonetheless, be affected by situational factors. The Need for Closure Scale (NFCS) was developed by Arie Kruglanski, Donna Webster, and Adena Klem in 1993 and is designed to operationalize this construct and is presented as a unidimensional instrument possessing strong discriminant and predictive validity.”

    In reading over the comments so far I am intrigued by which academic disciplines are thought to own, or provide the last word, on the concept of uncertainty. Need for closure in practice?

  19. jim says:

    People don’t like uncertainty at all, which is why there is so much to be gained by being certain when others aren’t.

    Say Company X is tooling along just doing business. Its stock price is X. But when Company Y makes an offer of 1.5X, Company X’s stock price almost instantly shoots up to ~1.5X. So 0.5X is the benefit of purchasing the stock when the value was “uncertain.” Once the offer of 1.5X is made, even though the deal might take months to close and face regulatory scrutiny, the only benefit to purchasing Company X’s stock is the small difference between 1.5X and ~1.5X.

  20. Adam B says:

    Other people have mentioned Ellsberg’s experiment with urns, so I’d like to mention that there are studies showing similar effects in the animal literature.

    For example, Stagner, Laude, & Zentall (2012) trained pigeons to peck on stimuli in succession. First, the pigeons were given a choice between two initial stimuli, let’s call them A and B. If the pigeons pecked A, then, after a delay, one of two stimuli (let’s call them 1, 2) would appear. Stimulus 1 would appear only 20% of the time after A was pecked, and when it did, the pigeons were always rewarded, whereas stimulus 2 would appear 80% of the time after A was pecked and the pigeons got nothing. When the initial stimulus B was pecked, either stimulus 3 or 4 appeared, and both of those stimuli were associated with reward 50% of the time.

    So, the pigeon was essentially given the following choice: do I peck A, and have a small chance of seeing a stimulus that means I will definitely get the sweet sweet grain, or do I peck B and have a moderate chance of getting grain, no matter what stimulus I see.

    The punchline is that the pigeons preferred stimulus A. So they consistently chose the stimulus that led to a much smaller chance of reward, but one where the reward was consistently signaled rather than random. In a way, the result is even more jarring than the Ellsberg experiment, in which humans were neither advantaged nor disadvantaged for choosing the informative option. The pigeons actually did “pay” for choosing the more informative option.

    The actual experiment is a bit more complicated and they tried to control for other possible explanations (as far as I can tell fairly well). I just tried to give a gist of it here.
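    The size of what the pigeons “pay” follows from simple arithmetic (the numbers below come from Adam’s description above, not from the original paper):

```python
# Option A: 20% chance of stimulus 1 (always rewarded),
#           80% chance of stimulus 2 (never rewarded).
p_reward_A = 0.20 * 1.0 + 0.80 * 0.0   # overall reward probability: 0.2

# Option B: stimulus 3 or 4, each rewarded 50% of the time.
p_reward_B = 0.50 * 0.5 + 0.50 * 0.5   # overall reward probability: 0.5

# Preferring A means forgoing 0.3 in reward probability per trial,
# in exchange for a signal that resolves the uncertainty right after the choice.
print(p_reward_A, p_reward_B)
```

    In other words, the informative option costs the pigeons more than half the reward rate of the uninformative one, and they choose it anyway.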

  21. Simon says:

    Maybe this is too simple but:

    “which only proves that humans don’t like to be exposed to uncertainty without compensation” -> If you have to be compensated for a choice with no opportunity cost, it seems to mean you don’t like uncertainty, right? If people were indifferent to uncertainty, a 50-50 split would emerge even without compensation. To me this seems enough to conclude: people don’t like uncertainty.
