Chris:

Of course, violations of expected utility theory are extremely well-known in economics. I didn’t claim otherwise! In my above post, I’m siding with the economists, not the physicists. My only criticism of economics was in the way that utility theory is often presented in their textbooks. This is the same way that I criticize statisticians for their textbook presentations of null hypothesis significance testing. But, yes, economists fully understand that it could make sense to do a single bet without that being the same as committing oneself to a series of bets, and economists also understand that utility theory is just a model.

Preferences that are partially ordered or intransitive are not “rational.” But I think your example violates neither of those conditions; rather, it violates the independence axiom, which says that if you prefer A to B, then you must also prefer the lottery

pA + (1-p)C

to

pB + (1-p)C

for any lottery C.

This assumption is generally thought to be the most problematic. The famous Allais paradox, for example, shows how it fails in experimental settings.

By the way, in contrast to Andrew’s claims above, violations of expected utility theory are extremely well-known in economics. There’s a solid 70 years of research on exactly this sort of issue, and it is standard undergraduate fare.

Supplementary information: https://static-content.springer.com/esm/art%3A10.1038%2Fs41567-020-01106-x/MediaObjects/41567_2020_1106_MOESM1_ESM.pdf

That’s quite a non-response: “I’m not sure where the disagreement lies”.

Sadly, the more I read about this the more relevant I find these remarks from Ben Golub linked above:

“Doctor et al. have done a generous thing, though unfortunately the learning will likely be lost on the EE crew itself. They are very committed to the bit, and the idea that their magic bullet will not restart all of economics is too bitter a fact to swallow.

“In their commitment to the hope that they will redirect a mature field with a simple, known idea (and without engaging with current work on the same issues), they embody the main feature of scientific cranks.”

In fact, I never found utility theory necessary for most of economics – at least the parts I found useful. For the behavior of markets, importance of market structures and information, and other applied areas, utility theory is simply not necessary. It is used on the normative side of economics – and that is the side that finally convinced me to stop teaching economics. It is vitally important to analyze policies, and normative theories are certainly useful there – but the economic basis has always seemed quite narrow and limited to me. Mathematical formalism does not guarantee formation of good policy, nor is it necessary for deciding when/how/if to rely on market mechanisms or use policy to intervene in them. Yet, much economic theory is based upon that – that “free” markets maximize social welfare. That result, while mathematically pure (in that it can be derived from a set of assumptions), is far from the only way to make policy choices.

On the positive (as opposed to normative) side, then any theory that helps explain how people actually behave is useful. As you suggest, if EE does a better job of this, then it could be important. However, it seems to me that there are many reasons, outside of utility maximization, that might explain otherwise anomalous behaviors. I’m not convinced that there is a simple physical basis that works better than the many attempts to modify the expected utility framework to account for these.

It seems it was actually related to the examples of what you could buy with the proceeds of the sale of your house, or where you could go to try to multiply your money.

Chris, James, thanks for your comments. I decided to go to the source and read this: https://twitter.com/ole_b_peters/status/1293240720858505224

“Utility is not ergodic. That’s upsetting because it largely invalidates expected-utility theory. But it’s the way to a mathematically sound economic theory. The mathematics is not hard, though unfamiliar if you’ve studied economics. Read this 455-word note. Judge for yourself.”

This is my 695-word “summary”:

Utility is a function of wealth. According to expected utility theory, when individuals make financial decisions in the face of uncertainty they choose the course of action that maximizes their expected utility. For example, should you sell your house and buy bitcoins? Or imagine that you have to pay $50k to a loan shark at midnight and you just have $3k you got by selling your car at the pawn shop in front of the casino. How should you bet that money to optimize the probability of staying alive? Your utility function would be 1 if you have more than $50k at midnight, 0 otherwise.

But is it wise to make life-or-death decisions using utility, which is not ergodic?

We were talking above about choosing the course of action by comparing the expected utilities under the different alternatives. Utility is ergodic if the expected utility, averaging over the potential outcomes of that one-off event, equals the average of utility over time as time goes to infinity.

There was no concept of time there, just the wealth at the end of the event. We could instead think about the evolution of wealth over time and calculate the average of utility as the length of the period goes to infinity. On the other hand, utility is a monotonic function of wealth, and we don’t expect wealth to be ergodic, at least in the interesting cases where it changes in a meaningful way.

As it’s kind-of trivial that utility is not ergodic, we will in fact be looking at the change in utility instead. After all, maximizing expected utility at midnight is equivalent to maximizing the difference between expected utility at midnight and any arbitrary baseline like utility when we get the casino chips.

Still, we have a problem in that we’re looking at a “single-period” change in utility but we need an infinity of periods to average over time. So we need to construct a stochastic process for wealth, preferably in a way that ensures that we can transform the wealth process to get a stationary series.

For example, let’s say that the wealth process is additive, with innovations identically distributed at every period: in each period wealth increases by 1 unit or stays unchanged, with equal probability. Then we can just take utility to be equal to wealth, and after differencing we have a stationary process. In every period, the difference is +1 or 0 with equal probability. Averaging over an infinite number of periods we get, unsurprisingly, the average of +1 and 0, which is 0.5. The change in utility is ergodic. The change in wealth is also ergodic in this case.

But we could also assume that the wealth process is multiplicative, with innovations identically distributed at every period: in each period the previous wealth is multiplied by 2 or stays the same, with equal probability. We have to be a bit smarter in this case, but if we take logarithms before differencing we again get a stationary process. If we define utility as the logarithm of wealth, the change in utility is ergodic. In every period the difference is log(2) or log(1), and the average over time, or over the probability distribution at any time, is log(2)/2. The change in utility is ergodic again.
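For what it’s worth, here’s a quick numerical check of those two examples (my own sketch, using the same +1/+0 and ×2/×1 dynamics as above): the average change in utility along one long trajectory should match the one-period ensemble average, which is what ergodicity of the change requires.

```python
# Sketch (assumed setup matching the two examples above): verify numerically
# that the per-period CHANGE in utility is ergodic, i.e. its average along
# one long trajectory equals its one-period ensemble average.
import math
import random

random.seed(1)
N = 200_000

# Additive wealth (+1 or +0, equal probability) with utility u(w) = w:
# the utility change each period is just the wealth increment.
add_changes = [random.choice([1.0, 0.0]) for _ in range(N)]
time_avg_add = sum(add_changes) / N          # average along one long path
ens_avg_add = (1.0 + 0.0) / 2                # one-period ensemble average: 0.5

# Multiplicative wealth (x2 or x1, equal probability) with u(w) = log(w):
# the utility change each period is the log of the growth factor.
mul_changes = [math.log(random.choice([2.0, 1.0])) for _ in range(N)]
time_avg_mul = sum(mul_changes) / N          # average along one long path
ens_avg_mul = (math.log(2.0) + math.log(1.0)) / 2   # log(2)/2, ~0.3466

print(time_avg_add, ens_avg_add)
print(time_avg_mul, ens_avg_mul)
```

With 200,000 periods the two averages agree to a couple of decimal places in each case, as the note claims.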

For slightly more general definitions of the wealth process, like additive or multiplicative innovations with time-dependent parameters, there may be no solution. A combination of both, even with constant parameters, also seems problematic. Say in each period our wealth increases by 10%, because we get consistent returns from our investments, plus the $100k we save from our never-increasing salary, which we will be receiving until the end of time.

Anyway, the two examples above are enough to show that utility – well, the change in utility – is not ergodic. Well, actually, in each of those two particular examples it is ergodic if we define utility appropriately. But there is no single definition of utility that makes the change in utility stationary for both of those arbitrary stochastic wealth processes at once.

That largely invalidates expected-utility theory, somehow.

Hi Dale, I am not an economist so I can’t provide a great answer to your question. I can say that studying Ole Peters’s work was tremendously clarifying for me on some foundational concepts. However, I’m inclined to agree: every time I encounter economic work (including an advanced natural resource economics grad course I did), I come away thinking “gee, that’s a lot of mathematical formalism, but almost all the interesting questions here are philosophical and political.”

I agree that the agenda of EE is unlikely to change that, although I think if they can show – empirically – that a lot of heuristic behavior that is “irrational” under standard utility optimization has a simpler rationale in maximizing growth rates over time, that is very valuable and thought-provoking…

If I were back in graduate school (so long ago that I’m not sure if ergodicity applies or not), I might be intensely interested in these developments. Even today, I might look into them a bit more. But after many years of being an economist, my gut reaction is that the relevant applications of economics are not likely to be decided by mathematical formulations such as these. In the end, almost all policy choices depend crucially on questionable assumptions such as interpersonal and intergenerational comparisons of utility – often using changes in wealth as proxies for changes in utility. I doubt that replacing expected utility with a physical representation (of what, I’m not sure) will somehow resolve any of these issues.

Early in my career I did some work involving time discounting and relevant exhaustible resource extraction policies under uncertainty. My “contribution” was to show that under uncertainty, the appropriate optimal policy might be to slow our rate of exhaustible resource depletion while markets would generally accelerate it. I did this within a framework of maximizing expected discounted social welfare. Once, when presenting this work to an esteemed economist, they kept asking a single question: did I believe it was appropriate to discount social welfare over time. My answer – that even with discounting, I got my result – was unsatisfactory at the time, and is even more so on reflection.

I use this example to suggest that the issues involved with economics and its applications are more likely matters involving philosophy, sociology, politics, and psychology than matters of physics and economics. I invite others to show me that I am wrong – I sincerely might be. But my gut reaction tells me that my time is better spent elsewhere than doing a deep dive into ergodicity. Peter Dorman – are you out there?

Disclaimer: I might be biased towards Ergodicity Economics (EE), of course, because I understand it in some depth, but I am open to learning from such discussions.

It surprises me that someone like you seems to judge the whole research programme of EE only from a journalistic article about it.

Your argument about the difference between what $100 means for a pauper and a rich man was already Bernoulli’s motivation behind the introduction of a utility function. Isn’t it completely endogenised as soon as you look at wealth changes and the wealth dynamics, as is done in EE, and not only at lottery payouts?

Sure, following the time perspective and calculating a time average makes an implicit assumption: that the decision-maker (DM) will encounter similar situations throughout his life. Here the central limit theorem, and ergodicity with it, comes through the backdoor of the T to infinity limit. But remember, the expectation value also relies on a limit, namely the N to infinity limit, which comes with other implicit assumptions that can turn out unrealistic as well. In the end, decision theorists are merely looking for a good enough (null) model. There are many reasons why Expectation Value based theories might not be good models, and vice versa.

You write:

“And the above analysis is relying entirely on the value of $100, without ever specifying the scenario in which the bet is applied.”

Actually, EE specifies exactly the scenario or environment to which a bet is subject. The environment is simply the DM’s wealth level and the relevant wealth dynamics. What else is there to specify?

From only skim-reading sec. 5 of the article you’ve referenced, I cannot see how the introduction of another (possibly psychologically loaded) effect called “fear of uncertainty” is any different from the research programme in decision theory since Bernoulli, and more so since Kahneman & Tversky, of introducing ever new psychological effects.

I would be delighted to hear your answers, and I welcome you to discuss these issues in more depth from Jan 18-20 at the Ergodicity Economics Online Conference http://lml.org.uk/ee2021/.

Best

Mark

Carlos, yes, you are right. To bring it in line with the example from Peters: there is an asset which generates a risky return and a safe asset; in each time period you choose what fraction of the assets to consume and the allocation between the two assets (the residual you don’t consume is reinvested in the assets). Asset growth between periods depends on the amount one chooses to consume, the realization of the returns, and the portfolio allocation. The fraction you choose to consume is a function of the riskiness of the asset and your risk aversion (which, as is pointed out elsewhere in this thread, is a problematic concept). Peters is pointing out that the fraction one consumes is quite different if the asset returns are additive (ergodic) or multiplicative (non-ergodic). Actually, I am not sure of the maths here. Perhaps the formulas still work if you take the process to infinity in time rather than averaging over infinitely many possible assets in a single period? But at any rate you end up with quite different optimal consumption paths. Peters’ claim is that you can ditch the utility function/risk aversion, that you should, and that people do choose the consumption share and allocation between the two assets that maximizes the growth rate of consumption.

I agree with Andrew that it is wrong to discard risk aversion/utility concepts entirely. But I think Peters is onto something by questioning the notion of expectation that is used in these models. In particular, it’s appropriate to think about an individual’s wealth over their life as a time average, not an “ensemble average”.

Somebody, you can also cash this out in Bayesian terms if you don’t like possible-world talk. For instance, you have a prior on the data generating process (the risky asset returns) but don’t know the odds with certainty. I don’t think Bayes vs classical stats is essential to this problem.

This is something up with which I will not put.

gah, max(ind_wealth)/sum(wealth) rather :) i.e. bounded above by 1

If ruin is possible, in the limit all remaining wealth is held by a single individual, and the probability of being that individual is infinitesimal.

Note that if you are the *house* offering this gamble to an ensemble of players, you DO care about the ensemble expectation. It’s a bad bet for you to offer as the house, since the expected value of the return *across the ensemble of gamblers* is positive.

So, this is a clever example of a gamble that is bad both for any given individual to take, and for the house to give!

I haven’t worked out the math, but the relevant convergence result I am intuiting occurs as the ratio of the total wealth to the wealth of the wealthiest gambler in the ensemble asymptotically approaches 1:

sum(wealth)/max(ind_wealth) -> f, and for any given value of f at time t, f_t < f_{t+dt}.

Haha, fair enough!

Here’s what it boils down to, AFAICT. Ole Peters argues that the expectation value of a non-ergodic observable (say, wealth per se, in this case undergoing some kind of multiplicative dynamic) should *NOT* be used, but that the multiplicative growth rate (in this case ~0.95) *IS* an ergodic observable, and so its expectation value is meaningful outside of a multiverse of oneself.

The exponential growth rate in the coin-flipping gamble is ~0.95 < 1, and is therefore not a good bet to take either once or any number of times.

Why not even once? Well, I think the idea is that you are evaluating a gambling *strategy*, and if you consistently took bets of this sort (where the ensemble expectation was positive, but the time-average growth rate was not) your wealth over time would entrain on a downward trajectory.
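A minimal simulation (my own sketch, not from the thread) of the +50%/-40% flip makes the contrast concrete: the ensemble expectation grows 5% per flip, while the per-flip time-average growth factor is sqrt(1.5 * 0.6), roughly 0.9487 < 1, so a single long trajectory decays.

```python
# Sketch (my own check of the coin flip discussed above): ensemble vs time
# average for a gamble that multiplies wealth by 1.5 (heads) or 0.6 (tails).
import math
import random

random.seed(7)

ensemble_factor = 0.5 * 1.5 + 0.5 * 0.6   # expected factor per flip: 1.05
time_avg_factor = math.sqrt(1.5 * 0.6)    # per-flip growth factor: ~0.9487

# One long trajectory: work in log-wealth to avoid overflow/underflow.
n = 100_000
log_w = sum(math.log(random.choice([1.5, 0.6])) for _ in range(n))
empirical_factor = math.exp(log_w / n)    # close to 0.9487, not 1.05

print(ensemble_factor, time_avg_factor, empirical_factor)
```

The empirical per-flip factor from one long path lands near 0.9487, which is the sense in which the strategy “entrains on a downward trajectory” despite the positive ensemble expectation.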

If the gamble were merely additive, things are different.

> Perhaps my use of the term ruin is confusing, I don’t mean that you lose all of your wealth necessarily, just your bankroll. Obviously, what people consider their bankroll will vary, but the point is that if you lose your bankroll you are ruined in the sense that you can’t play anymore.

In that case I really don’t get what your line of reasoning is.

> People seem to be hung up on the $ amount of the bet, and that the $100 isn’t significant to them or to many people, but that seems irrelevant to me.

How large or small $100 is relative to your total wealth is what determines if, in order to maximize the expected geometric growth rate of your wealth, it’s better to bet or not to do it.

You said that there is an optimal bet size. It will be x% of your wealth.

Let’s say you are given an option to play that game once with a $100 bet.

a) If x% of your wealth is $100 then you want to play. $100 is exactly the optimal amount to play.

b) If x% of your wealth is more than $100 you also want to play. You’d like to bet more but $100 is better than nothing.

c) If x% of your wealth is between $50 and $100, the $100 bet is again suboptimal. And, in this case, riskier. You would have preferred to bet a bit less. But to maximize growth you still prefer to play.

d) If x% of your wealth is below $50, the $100 bet is definitely too high. You pass.

> So we are talking about the maximization of wealth. Wealth is maximized via the Kelly Criterion. Any amount wagered that diverges from what the Kelly Criterion spits out is not optimal and therefore doesn’t maximize wealth.

That’s not correct. It doesn’t maximize wealth, it maximizes the _logarithm_ of wealth.

> Now if you bet less than Kelly while you don’t maximize wealth you at least aren’t inviting ruin, so I would be willing to make bets for less than Kelly. But the minute you exceed Kelly you invite ruin, so yes that’s a bad bet and I would never do it.

That’s not correct. It’s the minute you exceed _twice_ Kelly that you will go down to zero.

> Any bet where you bet your entire bankroll is a bad bet unless your probability of winning is 100%. If your probability of winning is not 100%, then the optimal bet size will be less than 100% of your bankroll, so betting 100% of your bankroll means ruin eventually.

That’s not correct. Imagine you have a 99.9% probability of a 1000% gain and a 0.1% probability of a 1% loss. The optimal bet size would be close to 100%. Betting 100% of your bankroll would be suboptimal but would not “mean ruin eventually”. That happens when you bet twice the optimal bet size.
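As a hedged illustration of the twice-Kelly point (using my own hypothetical numbers, not the bet above): for an even-money bet won with probability p = 0.6, the Kelly fraction is f* = 2p - 1 = 0.2, and the expected log-growth is approximately zero at twice Kelly and clearly negative beyond it.

```python
# Hypothetical even-money bet (my numbers, not the thread's): win with
# probability p = 0.6, staking a fraction f of the bankroll each time.
import math

p = 0.6

def log_growth(f):
    # Expected per-bet change in log-wealth when betting fraction f.
    return p * math.log(1 + f) + (1 - p) * math.log(1 - f)

f_star = 2 * p - 1             # Kelly fraction for an even-money bet: 0.2

print(log_growth(f_star))      # maximal growth, ~+0.0201
print(log_growth(2 * f_star))  # ~0 (slightly negative) at twice Kelly
print(log_growth(0.6))         # clearly negative: over-betting kills growth
```

Note the “twice Kelly gives zero growth” rule is exact only in the quadratic approximation; in this discrete example growth at 2× Kelly is very slightly negative.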

> economists aren’t measuring wealth optimization correctly. By simply looking at the average expected return

Not the expected return, the expected utility of the return. Peters’s proposal for this situation is completely equivalent to a logarithmic utility of wealth, which is an example of economists’ expected utility framework.

Min:

What is a “bankroll”? I don’t think this term is part of utility theory. If my bankroll is $100, then I might win a bunch of money or I might lose most of $100. That doesn’t seem like such a bad tradeoff at all!

Tbw:

You write, “If you commit to play the first time you should be willing to play forever, since the wager doesn’t change.” That’s not true. The wager does change. The amount of the bet changes. The first time, you’re betting $100. The second time, you’re betting $150 or $60. That’s not much different, but for the umpteenth bet, you might be betting $1,000,000. That can make a difference.

There’s also the time component. At some point you want to spend the money. If you play forever, you never get to spend the money, which makes the economic analysis moot.

Perhaps my use of the term ruin is confusing, I don’t mean that you lose all of your wealth necessarily, just your bankroll. Obviously, what people consider their bankroll will vary, but the point is that if you lose your bankroll you are ruined in the sense that you can’t play anymore.

People seem to be hung up on the $ amount of the bet, and that the $100 isn’t significant to them or to many people, but that seems irrelevant to me. It seems to me that Peters’ point is that economists aren’t measuring wealth optimization correctly. By simply looking at the average expected return they are ignoring the fact that in an example like he lays out the expected positive return is generated by an extremely thin slice of the population, while the rest of the population loses their money. Perhaps the wealth of the full population of people is maximized that way, but certainly not the wealth of an individual, as the vast majority of the population will lose their $100. So, why would we expect an individual to make a decision to maximize the wealth of a larger group at their own expense?

One of the arguments here is why wouldn’t you play the game once, since it has a positive expected value? Ok fine. We played once. Why would you not play a 2nd time? The bet still has a positive expected value. Then why not a 3rd time? We’re talking about independent events. If you commit to play the first time you should be willing to play forever, since the wager doesn’t change. And yet if you do, you go broke. The only answer is not to play.

> “It makes _what_ a bad bet?” – the game we are talking about, where you bet on the coin flip and lose 40% or win 50%. If you have to bet all of your bankroll it is a bad bet.

If there is _one_ game we’re talking about, it’s the one described in the Bloomberg article quoted by Andrew:

“Consider a simple coin-flip game, which Peters uses to illustrate his point. Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total.”

I cannot imagine that anyone would read that description and think that there is an implicit “Starting with the whole of your fortune, which amounts to the hefty sum of $100” there. Everyone will understand that $100 is the size of the initial bet, unrelated to their own level of wealth, which will typically be orders of magnitude higher.

I guess the article doesn’t properly describe what Peters’ example is about. I haven’t read carefully the example in the document linked from the Bloomberg article, but the Nature Physics example is clearly different:

“For example, a gamble can model the following situation: toss a coin, and for heads you win 50% of your current wealth, for tails you lose 40%”.

I think we can agree that there are different games being discussed in the comments, so it’s useful to be clear about what we’re talking about.

Tbw:

You quote: “Peters takes aim at expected utility theory, the bedrock that modern economics is built on. It explains that when we make decisions, we conduct a cost-benefit analysis and try to choose the option that maximizes our wealth.”

But that’s wrong. Expected utility theory (which, incidentally, can simply be called utility theory, as one of the consequences of the theory is that utility is the same as expected utility) does *not* assume that we’re conducting a cost-benefit analysis when we make decisions. It’s a theory that says that for decisions to be coherent, we have to act *as if* we’re doing these cost-benefit analyses, but the field of economics recognizes that we can’t actually be doing so. Utility theory represents an ideal of coherence, not a description of how decisions are made.

Regarding the other points, it depends how many bets will be made. The number of bets can’t be infinity because we have a finite lifespan and at some point we might want to spend that damn money. The appropriate decision can depend on the number of bets. It can make sense to do it for 1 round or 20 rounds but not for 1000 rounds.

So, I think there are 3 issues you are raising: 1) what if it’s a single bet rather than an infinite series; 2) the amount of money is insignificant, so ruin is off the table; and 3) my definition of a “bad bet” is extreme and also merely my definition.

Regarding point #3, the article states:

Peters takes aim at expected utility theory, the bedrock that modern economics is built on. It explains that when we make decisions, we conduct a cost-benefit analysis and try to choose the option that maximizes our wealth.

So we are talking about the maximization of wealth. Wealth is maximized via the Kelly Criterion. Any amount wagered that diverges from what the Kelly Criterion spits out is not optimal and therefore doesn’t maximize wealth. Now, if you bet less than Kelly, then while you don’t maximize wealth you at least aren’t inviting ruin, so I would be willing to make bets for less than Kelly. But the minute you exceed Kelly you invite ruin, so yes, that’s a bad bet and I would never do it. Yes, that’s a strong rule, but a necessary one if you are going to maximize wealth, which is what this is all about, isn’t it? In the bet described, the optimum amount to bet according to Kelly is negative, so you shouldn’t bet.

As for point #2, I again go back to the fact that we are talking about wealth maximization, the fact that $100 won’t ruin you doesn’t really matter. Losing $100 doesn’t maximize your wealth either.

As for point #1, the single bet instead of a series: I think my beef with using the expected value boils down to this: it simply isn’t enough information, in that it doesn’t distinguish between wildly different wagers. For example, a $100 bet that returns 20% for winning and -10% for losing, or one that returns 110% for winning and -100% for losing, has the same expected value as the 50%/-40% bet in the article. But in effect one is a $10 wager at 2 to 1 odds, one is a $100 wager at 1.1 to 1 odds, and the example in the article is a $40 wager at 1.25 to 1 odds. What matters is not just the 10% favorable spread in the payouts, but how much you have to wager to get that 10% spread. The variance in the returns is critically important, and expected value just sweeps that under the rug and ignores it.

As for sandwiches, I think a better analogy is that while eating one monte cristo sandwich won’t kill you, perhaps eating one every day will eventually kill you. As a general rule of thumb evaluating diet choices by saying what if I eat this every day probably isn’t a bad starting place. I’m not advocating being a monster and never eating a Reuben, or never making a single questionable or bad bet, but as a guiding principle, I think looking at the long-term impact of repeating a decision over and over is a good place to start evaluating if it is really a good decision.

> I think you could invoke a convergence argument here

https://i2.wp.com/www.mindcharity.co.uk/wp-content/uploads/2017/03/cartoon-science-communication.gif

;-)

Yes, well, you get the ‘time average’ by taking the limit T -> Inf. However, I think you could invoke a convergence argument here. If the system is non-ergodic (T_avg != E_avg), you arguably prefer one over the other even with a finite sequence. You will have to account for stopping rules, etc.

But this segues to the question of evaluating uncertainty *in the time dynamical formulation that they want*. I asked Ole about this on Twitter, and never got a satisfying response (IMO). Temporal frequentism gets in the way at this point :)

> So he would take the bet as given, embed it in a long series

A “long series” wouldn’t cut it. It should be an _infinite_ series. A finite sequence of events/bets can be expressed as a one-off event/bet.

Carlos, thanks for the reference. :)

It would have been more realistic in terms of investment or gambling if the initial stake had been something like $35,000 where you stand to win $17,500 or lose $14,000. One problem with that is that such large amounts of money are surely not proxies for utils.

Right. One flaw of the proposed game is that it is an infinite sequence of bets in which you always put up your bankroll.

Tbw:

But this is one of the points of my above post. The concept of “your bankroll” is itself artificial. I can lose $100 and I’ll be just fine. Losing $100 is not “ruin.” Also, regarding “over time,” you have to specify how many times the game will be played. Eating one sandwich is healthy and not at all irrational; committing to eat 1000 normal-sized sandwiches within the course of one day is a bad idea and could well kill a person.

Also, your definition of a “bad bet” is . . . well, as Phil might say, it’s your definition so you can define it however you want, but the idea that there’s some “optimal bet” and that any bet is “bad” if it’s more than that amount . . . that’s a pretty strong rule. In the setting being described here, the bet in question is evaluated not with respect to an optimum but relative to the alternative of not betting. If you’d rather not risk any of your $100, that’s your call, but I think it makes perfect sense to risk some or all of $100 on a positive expected-value bet.

That equation is one property of the solution. The thing being optimized is the total discounted utility of a sequence of consumptions c(1), c(2), …, c(T). Something like

Expected_Value_as_of_t=1 [ Sum_from_t=1_to_t=T [ discount_factor^t * utility( c(t) ) ] ]

So yes, it’s an average of something across what we consider (at t=1) that are possible “states of the world”.

That something that we are averaging is the sum across time of something else: the discounted utility at each time in that “state of the world”.
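As a rough illustration of that averaging (purely hypothetical numbers: log utility, a discount factor of 0.95, T = 20, and an arbitrary i.i.d. consumption process), a Monte Carlo version of the expression looks like this:

```python
# Illustrative Monte Carlo (hypothetical numbers, my own sketch) of the
# expectation written above: average, over simulated "states of the world",
# of the discounted sum of utilities of a consumption stream c(1)..c(T).
import math
import random

random.seed(0)
T, beta = 20, 0.95
utility = math.log                 # one common choice of u(c)

def discounted_utility(path):
    # Sum_from_t=1_to_T [ beta^t * utility( c(t) ) ] in one state of the world.
    return sum(beta ** t * utility(c) for t, c in enumerate(path, start=1))

n_worlds = 10_000
value = sum(discounted_utility([random.uniform(0.5, 1.5) for _ in range(T)])
            for _ in range(n_worlds)) / n_worlds
print(value)                       # the expected value as of t = 1
```

Each simulated path is one “state of the world”; the outer average is across worlds, not across time, which is exactly the distinction being debated here.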

Why do you say we’re assuming an ergodic process when we calculate an expected value over a “distribution of states of the world”?

What ergodic process would that be?

An infinite, stationary process that has an equilibrium distribution such that the values are in the set that contains the total discounted utility (in our original “time” dimension) of every possible “state of the world” { Sum_from_t=1_to_t=T [ discount_factor^t * utility( c(t) ) ] } with a probability equal to the probability of that “state of the world” in our original formulation?

> you are averaging across possible states of the world, so you are assuming an ergodic process

Don’t believe everything you read on the internet. This is nonsense. Only an explicitly anti-Bayesian philosophy requires repeated trials across time to interpret an average, and given that this is an explicitly Bayesian blog you won’t get much traction here. But even if you accept that probabilistic evaluations of one-time events are meaningless, these are philosophical positions, not “assumptions” in the mathematical sense of the word.

Not 100% sure of this, but when you derive the optimal consumption from u’(c_t) = βE_t[v’(a_{t+1}R_{t+1})] you are averaging across possible states of the world, so you are assuming an ergodic process. Excuse the horrible notation.

The optimal bet is so small for something like Powerball that you need to have a bankroll of hundreds of millions to make buying even 1 ticket optimal.

“It makes _what_ a bad bet?” – the game we are talking about, where you bet on the coin flip and lose 40% or win 50%. If you have to bet all of your bankroll it is a bad bet. Any bet where you bet your entire bankroll is a bad bet unless your probability of winning is 100%. If your probability of winning is not 100%, then the optimal bet size will be less than 100% of your bankroll, so betting 100% of your bankroll means ruin eventually. We know there is only a 50% chance of winning this game described above, so the optimal bet size cannot be 100%. Therefore, it must be a bad bet.

Yes, one time is a bad bet, if you define a bad bet as any bet that will lead to ruin over time. The only good bet is one with a positive expectation, made in the optimal amount or less, but never more than the optimal amount.

Sure, you won’t go broke if you just play one time, but that doesn’t make it a good bet. It’s not like playing one spin of roulette magically makes it a good bet instead of a bad bet.
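For concreteness, the Kelly reasoning above can be checked numerically for this particular flip (win 50% of the stake with probability 1/2, lose 40% otherwise). This is only an illustrative sketch; the function name and the grid search are mine, not anything from the thread:

```python
import math

def log_growth(f, p=0.5, win=0.5, loss=0.4):
    """Expected log growth per flip when staking fraction f of the bankroll."""
    return p * math.log(1 + win * f) + (1 - p) * math.log(1 - loss * f)

# Grid search for the Kelly optimum over fractions of the bankroll.
fractions = [i / 1000 for i in range(1000)]
best = max(fractions, key=log_growth)

print(f"optimal fraction: {best:.3f}")                           # ≈ 0.25 of the bankroll
print(f"log growth at the optimum: {log_growth(best):+.4f}")     # positive
print(f"log growth staking everything: {log_growth(1.0):+.4f}")  # negative: ruin over time
```

The optimum lands at about a quarter of the bankroll, and the expected log growth at full stake is negative, which is exactly the sense in which betting 100% is a bad bet.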

]]>It would be better to have Ole speak for himself, or failing that, to check out the lecture notes on the Ergodicity Economics website.

I don’t have a horse in this race (hah!). I just found working through his arguments clarifying on both ergodicity and probability. In the end, I don’t entirely agree with what they’re up to.

The impression I have in answer to your question is that Ole thinks the only meaningful application of probability is to evaluate a series or sequence of events/bets, not one-off events/bets. So he would take the bet as given, embed it in a long series, note the time average growth rate < 1, and say "don't take the bet!"

I don't entirely agree. But I don't entirely disagree either!

I brought up the case of the lottery with expected payout ~$3 billion to Andrew above to illustrate this with an extreme case. The expected value of the ticket was positive. And yet! And yet… it is a surefire donation to the lottery system, no? If there is no large ensemble of versions of myself buying an inordinate # of tickets for that ‘one off’ event, do I care about the positive ensemble expectation? Why would I?

How many lottery tickets should I buy if they cost $1 each (inclusive of opportunity cost let's say), and the odds are 1:2.8 billion?

]]>> But if the stopping condition is when your bankroll drops below 5¢, everybody stops with 5¢ or less, or dies without cashing out.

Under these conditions any bet is bad.

Say it’s a coin flip and you lose 10% or win 100%. It’s a bad bet, even though the geometrical average is positive.

Hey, even if it’s a sure gain of 100% in every flip it’s a bad bet.

]]>“how many times would you have to commit to play for it to become a bad bet?”

If the stopping condition is some finite number of times and you can afford to lose your original bankroll, then go for it. But if the stopping condition is when your bankroll drops below 5¢, everybody stops with 5¢ or less, or dies without cashing out. Hmmm. Let’s put a stopping condition on the upside. Say that you cash out if your bankroll is $1,000 or more, then it’s probably a good bet, eh?
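A quick Monte Carlo sketch of this stopping rule (5¢ floor, $1,000 cap, whole bankroll staked on each 50%-gain / 40%-loss flip; the function and parameter names are mine):

```python
import random

def play(start=100.0, floor=0.05, cap=1000.0, max_flips=10000, rng=random):
    """Stake the whole bankroll on the 50%-gain / 40%-loss flip until
    the bankroll drops below the floor or reaches the cap."""
    w = start
    for _ in range(max_flips):
        if w < floor or w >= cap:
            break
        w *= 1.5 if rng.random() < 0.5 else 0.6
    return w

random.seed(1)
results = [play() for _ in range(20000)]
hit_cap = sum(r >= 1000.0 for r in results) / len(results)
print(f"fraction cashing out at $1,000 or more: {hit_cap:.3f}")
print(f"average final bankroll: ${sum(results) / len(results):.2f}")
```

Every path ends either below the floor or at the cap (never above $1,500, since the last winning flip starts from under $1,000), so the two printed numbers summarize the whole stopped game.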

]]>> So if you do not have access to Many Worlds copies of yourself, according to Peters, this is a bad bet.

What about the single bet case? Is the “50% gain / 40% loss” coin flip a bad bet, according to Peters, because I do not have access to Many Worlds copies of myself?

> His thesis is that in any non-ergodicity growth situation, you always go with the time average, which describes what happens to a singular ‘typical’ trajectory.

What’s the definition of a non-ergodicity growth situation? As opposed to what? Unfortunately the Nature Physics paper doesn’t explain much.

Growth rate optimization has been discussed, and used, for decades before anybody thought of calling it ‘ergodicity economics’. Maybe he could have included some references in the paper. He mentions only that it’s “well known among gamblers as Kelly’s criterion”, but it has also been proposed by statisticians like Breiman and economists like Markowitz.

I think Markowitz advocates for it more convincingly in his 1976 article “Investment for the long run: New evidence for an old rule”. His argument doesn’t require all the periods to be identical or an infinite number of them. He discusses how to define asymptotic optimality. If the game never ends the final wealth is undefined.

http://finance.martinsewell.com/money-management/Markowitz1976.pdf

]]>Carlos, the time average growth rate for a trajectory taking a long series of this bet is around 0.95 IIRC. This contrasts with the ensemble expectation which is 1.05. So if you do not have access to Many Worlds copies of yourself, according to Peters, this is a bad bet. His thesis is that in any non-ergodicity growth situation, you always go with the time average, which describes what happens to a singular ‘typical’ trajectory.

What I was asking Andrew is how he thought about evaluating the middle ground – where you are taking more than one bet in sequence quite possibly, but not so many that the convergence to time average is plausible. I think you just have to try and solve explicitly for the distro and go from there…
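A small simulation along these lines, using the thread’s 50%-gain / 40%-loss flip (the path and flip counts are arbitrary choices of mine):

```python
import math
import random

random.seed(0)
n_paths, n_flips = 10_000, 100
growth_rates = []
for _ in range(n_paths):
    w = 1.0
    for _ in range(n_flips):
        w *= 1.5 if random.random() < 0.5 else 0.6
    growth_rates.append(w ** (1 / n_flips))  # per-flip growth of this one path

growth_rates.sort()
median_growth = growth_rates[n_paths // 2]
frac_ahead = sum(g > 1 for g in growth_rates) / n_paths

print(f"ensemble expectation per flip (exact): {0.5 * 1.5 + 0.5 * 0.6:.2f}")  # 1.05
print(f"typical (median) growth per flip:      {median_growth:.3f}")          # ≈ 0.95
print(f"fraction of paths ahead after {n_flips} flips: {frac_ahead:.3f}")
```

The median per-flip growth sits near sqrt(1.5 × 0.6) ≈ 0.95, matching the figure quoted above, while the ensemble expectation stays at 1.05; most paths lose money.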

you don’t gain any performance boost from it (depending on the setting, of course).

[…]

Specifying preferences or introducing irrational behavior into the models in a tractable way matters much less than you’d think.

Can you provide a source for this? I would believe that the models just have too many parameters in that case, but not that a model properly capturing the use of heuristics would have poorer performance in principle.

]]>Basic economic principles state that both risk (e.g. std dev) and expected return should matter. For example, in the case of quadratic utility, E(u) = E(x) – 0.5*a*var(x), where a is “risk-aversion” coefficient. In this world, a heuristic like Sharpe ratio E(x)/Sd(x) can be useful to compare bets.

That said, I deeply distrust graduate student experiments involving imagined sums of money or small amounts of money.
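As a toy illustration of the quadratic-utility formula above applied to the thread’s 50%/40% coin flip, with an arbitrary risk-aversion coefficient a = 2 (the helper function is hypothetical, not from any source):

```python
import math

def mean_var_utility(outcomes, probs, a):
    """Quadratic-utility score E(x) - 0.5 * a * Var(x) for a discrete bet."""
    mean = sum(p * x for p, x in zip(probs, outcomes))
    var = sum(p * (x - mean) ** 2 for p, x in zip(probs, outcomes))
    return mean - 0.5 * a * var, mean, math.sqrt(var)

# The 50%-gain / 40%-loss flip expressed as net returns per dollar staked.
score, mean, sd = mean_var_utility([0.5, -0.4], [0.5, 0.5], a=2.0)
print(f"E(x) = {mean:.3f}, SD = {sd:.3f}, Sharpe = {mean / sd:.3f}")  # 0.050, 0.450, 0.111
print(f"mean-variance score at a = 2.0: {score:+.4f}")                # negative
```

At this level of risk aversion the score is negative, so the mean-variance agent declines the bet despite its positive expectation, which is the point about risk mattering alongside return.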

]]>Yes, actually, if one quits after 2 rounds, one wins on average. One wins in only 1/4 of scenarios and loses in 3/4 of scenarios, but the average is higher: 225 + 90 + 90 + 36 = 441 = 4 × 110.25.

And when one multiplies each of those outcomes by the next round’s ratio, the multiplication distributes over the average (the flips are independent), so we can just multiply the average itself by the per-round average ratio of 1.05.

So, it follows that I have been wrong. My analysis of financial aspect of it has been wrong.

What’s really going on is that the more rounds you play with the total on the table, the closer the game gets, informally and not quite correctly speaking, to a “lottery”: your top gain can be very big (exponential), but the probability that you will be in the net positive tends to 0 (the typical, geometric-average ratio is less than 1).

So I was wrong about the psychology of this too: when people decide whether to play a lottery with only a high payout (no small payouts), they usually just weigh “big gain, small chance” against “small loss, large chance”, not the average. (The effect we see here is comparable to the following: sell 900,000 tickets at $1 each and pay out $1 million to one ticket, or sell 1,100,000 tickets at $1 and pay out $1 million to one ticket; most people would be about equally likely to play or refuse either version, regardless of the difference in expectation. So they do see that they will almost certainly lose, and they are aware of a potentially huge upside, but they don’t actually compute the averages before deciding.)

Yes, so you are right – if one keeps it all on the table, it is essentially a lottery, profitable to the player and losing for the house on average (but the chance of winning becomes smaller as the number of rounds becomes bigger).
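The two-round arithmetic and the “lottery” limit can both be checked directly (the round counts below are arbitrary choices of mine):

```python
import math

start = 100.0
# All four two-round outcomes of the 50%-gain / 40%-loss flip.
two_rounds = sorted(start * r1 * r2 for r1 in (1.5, 0.6) for r2 in (1.5, 0.6))
print([round(x, 2) for x in two_rounds])  # [36.0, 90.0, 90.0, 225.0]
print(round(sum(two_rounds) / 4, 2))      # 110.25 = 100 * 1.05**2

# With more rounds the mean keeps compounding at 1.05 per round, but the
# share of paths that finish ahead shrinks: the "lottery" limit.
for n in (2, 10, 50, 200):
    ahead = sum(math.comb(n, h) for h in range(n + 1)
                if 1.5 ** h * 0.6 ** (n - h) > 1) / 2 ** n
    print(f"n = {n:3d}: mean = {start * 1.05 ** n:14.2f}, P(ahead) = {ahead:.3f}")
```

The exact binomial count confirms the picture: the average grows exponentially while the probability of being in the net positive falls toward zero.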

]]>I guess you agree with my calculation of the (arithmetic) average. I agree with you that the geometric average may be more relevant depending on unspecified details. As you said, the problem is ambiguous. That doesn’t change the fact that the expected return is positive.

As for the house cheating, if they were offering a two-rounds $100 game to anyone who wanted to take it, wouldn’t they be losing on average $10.25 per game? Wouldn’t the players be getting a $10.25 profit on average?

]]>You think in terms of ratios. When you win, you gain 50%, so the ratio is 1.5:1 = 3/2.

When you lose, you lose 40%, so the ratio is 1:0.6 = 5/3 > 3/2. So the loss is actually worse than the gain: the ratio is less favorable.

If you want to average in an additive way, you need to consider logarithms of those ratios. The result is a product; if you want to treat it as a sum, you need to convert to logarithms (and then convert back to see what the actual effect of the average is).

(All this assumes no replenishment of losses and no taking out the gains, the whole total is kept on the table. People are correctly assuming that the house is cheating, misrepresenting a bad deal as a good one, so they refuse to play.)
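In numbers, the log-average conversion described above works out as follows:

```python
import math

# Average the logs of the two growth factors, then convert back.
avg_log = 0.5 * math.log(1.5) + 0.5 * math.log(0.6)   # = 0.5 * ln(0.9)
print(f"average log growth per flip: {avg_log:+.4f}")  # negative
print(f"converted back: {math.exp(avg_log):.4f}")      # sqrt(0.9) ≈ 0.9487
```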

]]>> Sure it’s a positive expectation bet, but the condition of having to wager all of your bankroll each time, makes it a bad bet.

It makes _what_ a bad bet?

Playing one time is a bad bet?

Otherwise, how many times would you have to commit to play for it to become a bad bet?

]]>To find the geometric mean, multiply 1.5 by 0.6 and take the square root. When is that appropriate? When you are interested in the expected return on investment. And that is how the question is presented:

“Starting with $100, your bankroll increases 50% every time you flip heads. But if the coin lands on tails, you lose 40% of your total.”

You don’t put up $100 every time you bet, you put up your bankroll. If every time you bet you win $50 on heads and lose $40 on tails, it’s great. If every time you bet you win 50% of your bankroll on heads and lose 40% of your bankroll on tails, it’s lousy.
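The contrast between the two readings can be made concrete with a short simulation (the flip count and seed are arbitrary choices of mine):

```python
import random

random.seed(42)
flips = [random.random() < 0.5 for _ in range(10_000)]

# Proportional stakes: win 50% / lose 40% of the current bankroll each flip.
prop = 100.0
for heads in flips:
    prop *= 1.5 if heads else 0.6

# Fixed stakes: win $50 / lose $40 per flip (bankroll allowed to go negative).
fixed = 100.0
for heads in flips:
    fixed += 50.0 if heads else -40.0

print(f"proportional bankroll after {len(flips)} flips: {prop:.3e}")   # collapses toward 0
print(f"fixed-stake bankroll after {len(flips)} flips:  {fixed:,.2f}") # grows roughly $5/flip
```

Same coin, same sequence of flips: the proportional game is ruled by the geometric mean sqrt(1.5 × 0.6) < 1 and collapses, while the fixed-dollar game is ruled by the arithmetic mean +$5 per flip and grows.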

]]>This is just a fat-tails thing. The reason that it is profitable “on average” is that there will always be some lucky person who wins a lot, while everyone else loses. But that average profitability is meaningless since no one will ever experience it. The median experience is that you lose all of your money. It’s the same as the Powerball, sure sometimes the Powerball has a positive expected value, but it is meaningless because the probability of winning is so low. If you run the Kelly Criterion on it, you find that it would recommend that if you have a bankroll of say $500 million, then it is worth buying one $2 ticket. But effectively even though it is a positive expectation bet, it is still a bad bet. Same with this. Sure it’s a positive expectation bet, but the condition of having to wager all of your bankroll each time, makes it a bad bet.
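For what it’s worth, the Kelly arithmetic for a lottery-like bet can be sketched with the standard binary-bet formula f* = p − (1 − p)/b. The jackpot size and odds below are assumed, roughly Powerball-like, so the resulting bankroll figure is only an order-of-magnitude check on the $500 million claim:

```python
# Kelly fraction for a binary bet: f* = p - (1 - p) / b, where p is the win
# probability and b the net odds (profit per dollar staked). All numbers
# below are hypothetical, roughly Powerball-like.
p = 1 / 292_000_000          # assumed jackpot odds
jackpot = 1_500_000_000      # assumed prize; taxes, lump sums, splits ignored
ticket = 2.0
b = jackpot / ticket - 1     # net odds per dollar staked

f_star = p - (1 - p) / b
bankroll_for_one_ticket = ticket / f_star

print(f"Kelly fraction: {f_star:.2e}")
print(f"bankroll at which one ${ticket:.0f} ticket is Kelly-optimal: "
      f"${bankroll_for_one_ticket:,.0f}")
```

With these assumed figures the Kelly fraction is on the order of 10^-9, so a bankroll in the high hundreds of millions is needed before even one ticket is optimal, consistent with the ballpark in the comment.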

]]>