(1) The misplaced burden of proof, and (2) selection bias: Two reasons for the persistence of hype in tech and science reporting

Palko points to this post by Jeffrey Funk, “What’s Behind Technological Hype?”

I’ll quote extensively from Funk’s post, but first I want to make a more general point about the burden of proof in scientific discussions.

What happens is that a researcher or team of researchers makes a strong claim that is not well supported by evidence. But the claim gets published, perhaps in a prestigious journal such as the Journal of Theoretical Biology or PNAS or Lancet or the American Economic Review or Psychological Science. The authors may well be completely sincere in their belief that they’ve demonstrated something important, but their data don’t really back up their claim. Later on, the claims are criticized: outside researchers look carefully at the published material and point out that the evidence isn’t really there. Fine. The problem arises when the critics are then held to a higher standard: it’s not enough for them to point out that the original paper did not offer strong evidence for its striking claim; the critics are asked to (impossibly) prove that the claimed effect cannot possibly be true.

It’s a sort of Cheshire Cat phenomenon: Original researchers propose a striking and noteworthy (i.e., not completely obvious) idea, which is published and given major publicity based on purportedly strong statistical and experimental evidence. The strong evidence turns out not to be there, but—like the smile of the Cheshire cat—the claim remains even after the evidence has disappeared.

This is related to what we’ve called the “research incumbency advantage” (the widespread attitude that a published claim is considered true unless conclusively proved otherwise), and the “time-reversal heuristic” (my suggestion to suppose that the counter-argument or failed replication came first, with the splashy study following after).

Now to Funk’s post on technological hype:

Start-up losses are mounting and innovation is slowing. . . . The large losses are easily explained: extreme levels of hype about new technologies, and too many investors willing to believe it. . . . The media, with help from the financial sector, supports the hype, offering logical reasons for the [stock] price increases and creating a narrative that encourages still more increases. . . .

The [recent] narrative began with Ray Kurzweil’s 2005 book, The Singularity is Near, and has expanded with bestsellers such as Erik Brynjolfsson and Andrew McAfee’s Race Against the Machine (2012), Peter Diamandis and Steven Kotler’s Abundance (2012), and Martin Ford’s The Rise of the Robots (2015). Supported by soaring venture capitalist investments and a rising stock market, the world described in these books is one of rapid and disruptive technological change that will soon lead to great prosperity and perhaps massive unemployment. The media has amplified this message even as evidence of rising productivity or unemployment has yet to emerge.

Here I [Funk] discuss economic data showing that many highly touted new technologies are seriously over-hyped, a phenomenon driven by online news and the professional incentives of those involved in promoting innovation and entrepreneurship. This hype comes at a cost—not only in the form of record losses by start-ups, but in their inability to pursue alternative designs and find more productive and profitable opportunities . . .

These indicators are widely ignored, in part because we are distracted by information appearing to carry a more positive message. The number of patent applications and patent awards has increased about sixfold since 1984, and over the past 10 years the number of scientific papers has doubled. The stock market has tripled in value since 2008. Investments by US venture capitalists have risen about sixfold since 2001 . . . Such upward trends are often used to hype the economic potential of new technologies, but in fact rising patent activity, scientific publication, stock market value, and venture capital investment are all poor indicators of innovativeness.

One reason they are poor indicators is that they don’t consider the record-high losses for start-ups, the lack of innovations for large sectors of the economy such as housing, and the small range of technologies being successfully commercialized by either start-ups or existing firms. . . .

Funk then talks about the sources of hype:

For more recent technologies such as artificial intelligence, a major source of hype is the tendency of tech analysts to extrapolate from one or two highly valued yet unprofitable start-ups to total disruptions of entire sectors. For example, in its report Artificial Intelligence: The Next Digital Frontier? the McKinsey Global Institute extrapolated from the purported success of two early AI start-ups, DeepMind and Nest Labs, both subsidiaries of Alphabet (Google’s parent company), to a 10% reduction in total energy usage in the United Kingdom and other countries. However, other evidence for these purported energy reductions in data centers and homes are nowhere to be found, and the start-ups are currently a long way from profitability. Alphabet reported losses of approximately $580 million in 2017 for DeepMind and $569 million in 2018 for Nest Labs. . . .

Hype and its amplification come from many quarters: not only the financial community but also entrepreneurs, venture capitalists, consultants, scientists, engineers, and universities. . . .

Ya think??

Funk continues:

Online tech-hyping articles are now driven by the same dynamics as fake news. Journalists, bloggers, and websites prioritize page views and therefore say more positive things to attract viewers, while social media works as an amplifier. Journalists become “content marketers,” often hired by start-ups and universities to promote new technologies. Entrepreneurs, venture capitalists, university public relation offices, entrepreneurship programs, and professors who benefit from the promotion of new technologies all end up sharing an interest in increasing the level of hype. . . .

And this connects to the point I made at the beginning of this post. Once a hyped idea gets out there, it’s the default, and merely removing the evidence in favor is not enough. Mars One, Hyperloop, etc.: sure, eventually they fade, but in the meantime they suck up media attention and $$$, in part because they become the default, and the burden of proof is on the skeptics.

Selection bias in tech and science reporting

One other thing: the remark that journalists etc. “say more positive things to attract viewers” reminds me of what I’ve written about selection bias in science reporting (see also here). Lots of science reporters want to do the right thing, and, yes, they want clicks and they want to report positive stories—I too would be much more interested to read or write about a cure for cancer than about some bogus bit of noise mining—and these reporters will steer away from junk science. But here’s where the selection bias comes in: other, less savvy or selective or scrupulous reporters will jump in and hype the junk. So, with rare exceptions (some studies are so bad and so juicy that they just beg to be publicly debunked), the bad studies get promoted by the clueless journalists, and the negative reports don’t get written.

My point here is that selection bias can give us a sort of Gresham effect, even without any journalists knowingly hyping anything of low quality.
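To make that mechanism concrete, here is a toy simulation (hypothetical numbers; the rule "cover only positive, statistically significant results" is a stand-in for the incentives described above). Every individual estimate is unbiased, yet the covered stories are inflated:

```python
# Toy model of selection bias in science reporting (hypothetical numbers).
# 1,000 studies estimate an effect whose true value is zero; each estimate
# is unbiased but noisy. Click-driven outlets cover only positive,
# "statistically significant" estimates (z > 1.96), so the covered
# effects are inflated even though no journalist knowingly hyped anything.
import random
import statistics

random.seed(1)
TRUE_EFFECT = 0.0
SE = 1.0  # standard error of each study's estimate

estimates = [random.gauss(TRUE_EFFECT, SE) for _ in range(1000)]

# Coverage rule: positive and "significant" at the 5% level.
covered = [est for est in estimates if est > 1.96 * SE]

print(f"mean of all estimates:     {statistics.mean(estimates):+.3f}")
print(f"mean of covered estimates: {statistics.mean(covered):+.3f}")
print(f"share of studies covered:  {len(covered) / len(estimates):.1%}")
```

The average covered estimate should come out around 2.3 standard errors even though the true effect is exactly zero: a Gresham effect produced purely by the coverage filter, with no individual acting in bad faith.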


  1. Marcus says:

    There is this lovely example from just yesterday: The press release claims that a “gratitude intervention” has an effect on workplace incivility, but the actual paper reports – in both samples – no significant correlation between the intervention and workplace incivility.

    • Andrew says:


      I followed the link, and . . . wow. Just wow. Here’s the summary of results from experiment 1:

      To test H1 through H4, we used multiple mediation analyses as outlined by Hayes (2017) using Mplus 8.0 (Muthén & Muthén, 2017). Hypotheses were tested using 10,000 bootstrapped samples and 95% bias-corrected and accelerated confidence intervals. Our hypotheses proposed that a gratitude intervention would decrease incivility through the mechanisms of prosocial motivation (H1), relationship closeness (H2), self-control resources (H3), and perceived organizational support (H4). Table 3 shows the results for the multiple mediation analyses. Results demonstrated the indirect effect of the gratitude intervention via prosocial motivation was not significant, as the confidence interval contained zero (ab = -.01, 95% CI [-.04, .00]). Thus, H1 was not supported. Results similarly indicated a nonsignificant indirect effect through relationship closeness (ab = .00, 95% CI [-.04, .03]). Thus, H2 was not supported. However, results revealed support for H3; the confidence interval for the indirect effect via self-control resources did not contain zero (ab = -.10, 95% CI [-.20, -.01]). Finally, H4 predicted that POS would carry the influence of a gratitude intervention to incivility. This hypothesis was not supported, as the indirect effect was not significant (ab = .02, 95% CI [-.01, .08]).

      But, the abstract claimed, “the intervention decreased mistreatment (as reported by coworkers) by enhancing self-control resources.” I guess they were lucky that one of those comparisons reached the p less than 0.05 level so they could report success. Otherwise they would’ve been reduced to declaring victory at p less than 0.10 and publishing in PNAS!
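To put a number on the "lucky" part: with four hypothesized mediators each tested at the 5% level, a study of pure noise gets at least one "significant" result surprisingly often. A quick sketch (hypothetical setup; independent z-tests under the null, not the paper's actual bootstrap mediation analysis):

```python
# Rough sketch: chance of at least one "significant" result among four
# independent null tests at the 5% level. Analytically 1 - 0.95**4 ≈ 0.185;
# the simulation below should land close to that.
import random

random.seed(1)
N_STUDIES = 20_000  # simulated studies of pure noise
N_TESTS = 4         # four hypothesized mediators, as in the quoted paper

hits = 0
for _ in range(N_STUDIES):
    # z-statistics for four mediators when there is no true effect
    zs = [random.gauss(0, 1) for _ in range(N_TESTS)]
    if any(abs(z) > 1.96 for z in zs):
        hits += 1

print(f"share of null studies with at least one 'hit': {hits / N_STUDIES:.3f}")
```

So roughly one in five pure-noise studies of this shape can report "support" for some hypothesis, which is why "one of four confidence intervals excluded zero" is thin evidence on its own.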

      • Anonymous says:

        In study 2 the effects (such as they are) are in the wrong direction and all of the mediation analyses fall apart once you consider the endogeneity issues affecting the relationships between mediator and DV. This is supposedly the best journal in my field.

        • Andrew says:


          It makes you wonder how this got published at all. But then you remember . . . “peer review”! The peers of the researchers who do this sort of work have the same misconceptions about the role of data and methods in scientific learning. Anyway, the title and abstract don’t look so bad, so if you’re taking the perspective that you have to show absolute trust in the authors (otherwise you’re like the Stasi, engaging in destructive criticism, as Cass Sunstein, Steven Pinker, etc., might say), then I guess you have little choice but to accept the paper.

          This kind of thing makes me wonder what the papers look like that aren’t accepted by the journal. Maybe those papers more clearly state in the abstract that their results are noisy, and then the editors decide that the work isn’t important enough to be published in their important journal?

          • Martha (Smith) says:

            Clearly (at least to many of us), the system of “peer review” is not working. But how do we actually implement a better alternative? I’m not saying that there are no “better alternatives”; my concern is that when a better alternative is proposed, people will react with criticisms of the nature that the requirements are not specific enough, and thus proceed to throw the baby out with the bath water. It’s gonna be a tough sell.

            • Andrew says:


              My preferred alternative is that everything goes on preprint servers, and then journals are an overlay on that, a system for improving and recommending papers. The idea is to separate the functions of publishing, criticizing, improving, and endorsing papers.

              I don’t have any evidence that this system would work well. It just seems to make sense to me.

              • Brent Hutto says:

                If we’re dreaming…

                The role of a “reviewer” or “editor” would be akin to a co-author or collaborator. The idea should be (in my most idealistic scenario) that everyone involved with either the initial writing or the paper’s “publication” is pulling together to create the best possible manuscript. Nobody ought to view their role as any kind of gatekeeper (again, being idealistic I know).

                If that were the normal production process, then when outside post-publication critiques are offered, or even when outright mistakes are identified after publication, there would be no sense that the paper had passed into a curated, approved, official, and permanent form. Revising it would not be viewed as too late, and the only options would no longer be retracting the paper or defending it by attacking the critique's authors.

  2. Matt Skaggs says:

    From Funk:
    “…extreme levels of hype about new technologies, and too many investors willing to believe it. . . The media, with help from the financial sector, supports the hype, offering logical reasons for the [stock] price increases and creating a narrative that encourages still more increases. . . ”

    If I make a couple slight changes, we are back in the 19th Century:

    “extreme levels of hype about new [mines], and too many investors willing to believe it. . . . The media, with help from the financial sector, supports the hype, offering logical reasons for the [mine share] price increases and creating a narrative that encourages still more increases. . .”

    New technology is now the domain of old-fashioned hucksters…looking at you Elon Musk!

    • It definitely feels like, since about 1998 and the dot-com boom, we've been living in one California Gold Rush after another with relatively little to show for it.

      Sure, technology has improved, but I don't think the gold-rush attitudes really caused that. Technology was improving at similar rates between 1970 and 1999; if you just extrapolate forward you'd get smartphones and 802.11ac and 4G LTE and hybrid vehicles and video distribution over the internet, etc., all without the ridiculous hype.

  3. Bob76 says:

    An excellent resource on hype, booms, bubbles, and such is:
    Andrew Odlyzko: Recent Papers on Technology and Financial Manias


  4. The Cheshire cat is the character that keeps on giving.

    Karl Rohe also has a rule named after it:

    “One day Alice came to a fork in the road and saw a Cheshire cat in a tree. ‘Which road do I take?’ she asked. ‘Where do you want to go?’ was his response. ‘I don’t know,’ Alice answered. ‘Then,’ said the cat, ‘it doesn’t matter.’”

  5. Norbert says:

    Funk does not mention something that seems relevant: the decline in support for basic research. The breakthroughs of the 50s and 60s on lasers, transistors, etc. were based on fundamental research done in the 20s and 30s and refined in the 40s. Nowadays, at least in my impression, the way you fund basic research is to justify it in terms of its proximate practical payoffs. Imagine if someone had asked Gödel and Turing and Church, whose work is basic to all of current computer tech, what the practical implications of their work would be. How could they know? They were interested in questions in the foundations of arithmetic and proof theory, and couldn't have cared less about the tech payoffs. This is no longer the case, as the requirement in NSF grants asking for wider implications attests (note that these wider implications are almost always read as practical payoffs). At any rate, funding of blue-sky basic science feeds technology with a lag time of 50 years or so. We are no longer patient enough to do this, and that is a problem.

    • jonathan says:

      I somewhat disagree: those guys were working on questions that arose out of Cantor's work, the growth of axiomatic reasoning, and the development of modern proof forms. They had jobs, and those jobs included tackling the questions which were relevant then. E.g., you can't separate Gödel's work from Tarski's or von Neumann's or others'.

    • I disagree with this. NSF and others seem genuinely interested in basic research. By far the larger issue is that tech journalists, university tech transfer / communication offices, etc., seem insistent that basic research findings be spun as applied, contributing to hype.

      • This doesn’t seem to be the case at the NIH. They’re constantly asking what’s the “translational” impact of this biology research on clinical practice.

        I’m not so clear that pie in the sky “basic research” is what Norbert makes it out to be. We no longer have a wide open field of physics that is fundamental and unknown.

        Most of what we don’t understand and could have basic impacts today is unsung things: ecology, material science, electrical power grid innovation, fresh water, or already hyped things: medicines, drugs, cancer biology etc

          • @Daniel NIH is hard to think about — most of its institutes are, by design, focused on practical issues of health and particular diseases. NIGMS (general medical sciences) isn't, and its fraction of the total budget is roughly the same as it was 20 years ago. Despite having funding from NIH, I don't have a great sense of whether their fondness for basic research has changed.

          • I can tell you for example that my wife who is a top researcher in bone related biology and bone regeneration was recently told in comments on a jointly funded NSF/NIH call for proposals that *didn’t even get scored* that she should “Consider getting someone who is a bone expert on board”.

            I also know that while she works on a problem (non-union bone fractures) that affects something like 10 million people a year (new cases each year) in the US, she is constantly scraping by, while a hype-chaser at her university has something like 3 or 4 separate multi-million dollar grants to study ALS, a disease that affects on the order of 10,000 people in the US, not new cases per-year, but all cases total.

            Seeing those kinds of things does not make me think the NIH is doing a good job of allocating funds. It seems to be a heavily “in-group gets funding from friends” and “fund stuff that is dramatic and plays well in the NYT” type of situation.

            I was told as a graduate student by the NSF grant board that although I had an excellent track record and was clearly capable, I didn’t have a well developed statement of how I was going to save the world by teaching the underprivileged children… and so I was being passed up by the NSF in order to fund someone whose advisors knew how to copy and paste pap about my mission in life being to ensure immigrant children would learn the mysteries of soil mechanics.

            In other words, NSF seems thoroughly corrupted to me.

            • I’ve been on a dozen or so NSF panels, and this doesn’t reflect my experiences at all. It is true that outreach **activities** are part of the weights, and one can argue that they shouldn’t be, but this (i) is evaluated in terms of the actual proposed activities and not flowery prose about saving the children, and (ii) is pretty clearly stated as being part of the evaluation criteria.

              • My advisors were aware of many graduate students who got funded through NSF, and *none* of them actually *participated* in any of the actual outreach stuff they’d put in their applications…

                Conclude from that what you will.

            • jim says:

              “in-group gets funding from friends”

              Five words capture the history of funding for everything. Science is no exception.


                “The NSF GRFP has struggled with an uneven distribution of the award to a select few graduate schools. In 2019 31% of the grants were awarded to students of only 10 elite academic universities, with 14% of them awarded to just the top three: Berkeley, MIT, and Stanford[4].”

                they tried to address this by making changes in 2016 but

                “However, in 2018 the number of awards received by the top 10 universities was greater than any since 2011, and an even greater number of undergraduates awarded the fellowship came from the top 30 schools[8].”

      • gec says:

        Agreed, NSF really is interested in funding basic research (not so sure about NIH).

        I do think there’s value in asking about broader impacts, not only because many basic research questions *do* have broader immediate impacts (especially in health and education applications), but because it is useful for a reviewer to see how clearly the applicant has thought about their research. You don’t get broader impacts directly from the research, but via how the research informs our theories of how the world works. So a good broader impact statement illustrates that the applicant has thought about the theoretical implications in a way that lets them generalize beyond the specific work in their proposal.

        On the other hand, just like all of us, NSF has to “sell” itself to Congress, which requires giving Congressional reps ammunition they can use to justify funding to their constituents and the media. And because that justification happens in the same hype sphere as everything else, that is another function that broader impact statements serve.

        So it is not so much about justifying basic research to NSF, as much as it might be justifying it to the people that allow NSF to fund basic research.

  6. jim says:

    “Online [environmental apocalypse, racial disparity]-hyping articles are now driven by the same dynamics as fake news.”

    Once the claim is made, even by an activist or journalist, it's irrefutable truth. The evidence is passé and we're on to the solution. And just as the evidence needn't support the claim, the solution needn't be relevant to the problem. Once the solution has been declared, the only question left is: is The Solution enough to stop the (hyped claim), or do we need more?

  7. Anonymous says:

    One issue has to be volume, as in the volume of work done by the greater number of people involved in every field in many more locations. That obviously feeds a PR need, which raises the issue of audience, because there is targeted PR and less targeted. PR is communication: who is being communicated to and how?

    I sometimes think of birds and cats. If you've ever dealt with birds, whether a flock of pigeons or chickens in a coop, you know they tend to compete for attention. They want the food and they make noise to get it! As the feeder, you differentiate the birds by how they react and look, particularly with types that tend to look so similar they actually are hard to tell apart. So I broadcast the food, but make allowance for the more timid and more aggressive by tossing some far and some near. So I'm managing a flock response that has edges and areas of behaviors and that also acts with a general behavior (e.g., after the frantic burst, they settle down and feed quietly without crowding each other nearly as much). Now, my cat knows where the brush lives. So when I go there, he gets visibly excited. I tend to give him what he wants, but I don't brush him to his heart's content, rather to a point where I've had enough and my estimate is he's had enough so he will want more later. With the birds, I could put out a big feeder and keep it stocked, but then they become more dependent on that food source. I want them to come back like I want the cat to come back: a combination of what we both gain.

    So to me, much of the PR hype is birds rushing around looking for the food to come out or the cat wanting the brush. This is true all over social media: people who are fans track those who have been ‘touched’ by their idol liking or commenting on a post. Being noticed out of a crowd requires a crowd wanting notice. It also requires an idol, which is where I get really interested because of the play of idol and idolatry. Not entirely in a religious sense, but in the generalized sense that belief requires some form of ideal which chooses. In other words, a choice function. Which can achieve personification in the form of a pop star or a tenure committee. The invocation of choice requires a higher choosing ‘idol’ to which you attribute the power of choice.

    In one sense then, my take is that the past 30 to 40 years have seen the exposure of this basic behavior, common among animals and our relationships with them, in more abstract human fields. I tie a lot of this to both computing becoming personal and the explosion in the financial world, by which I specifically mean the chain by which relaxing the rules (notably in London) regarding money you could make inspired rapid advance in the derivativization (if that’s an acceptable word) of financial instruments using computers. These derivative products came almost out of nowhere, which meant they competed for attention because attention meant food, which becomes money, money, money.

    We’ve also lost sight of the essential flocking issue: people act like birds, so they tend to frantically hop around for the food, and this behavior repeats. In humans, over the last 40 years, job markets have changed vastly with people moving across international borders much, much more often, thus flocking into places and into fields which were relatively then more separate, smaller flocks. As the flocks form across borders both real and virtual, flocking behavior emerges. And that behavior requires a chooser, which means a combination of money and respect and position accorded by some chooser and accepted, more or less, by the flock.

    You can see an allure of power is to be the chooser, the idol who distributes the food. But part of that allure is creating a dependency. Or as I like to say, too bad I didn't screw up raising my oldest daughters, because then they'd need me more and I'd see them more often! Help create independence and you reap independence.

    I also think the layers of abstraction in human activity, expanded by the union of computing with practical possibility, have created an appearance of importance which more mirrors popular culture. As in, can you remember the big songs of some prior year? Most fade completely until they reappear in a commercial and you go, ‘Yeah, I liked that song!’ But underneath you can't deny that music has been changing. For me, for example, while we may notice intellectually incomplete psychological and sociological work, the algorithmic revolution is growing so fast I'm thinking cubic. Medical imaging is leaping forward into diagnostics. (Reminder: my dad was a radiologist and I saw the development of the CT and MRI technology.) Another example is telematics in cars: don't people realize that telematic information will enable the flock to determine road conditions, thus reducing the apparent cost of making roads smarter? They will abstract the road. That work all continues while at the surface people are consumed with whether Uber is paying drivers enough. They will abstract the driver beyond the characteristics of staying safely on the road from place to place, so your ride will eventually appear to be polite enough to pull over past the puddle so you don't get wet.

    Most of life is just noise. That’s why I love your blog: you think and talk about the fundamental issues of rigor and truth in what is becoming arguably the space where we most need rigor and truth.

  8. Anoneuoid says:

    What happens is that a researcher or team of researchers makes a strong claim that is not well supported by evidence. But the claim gets published, perhaps in a prestigious journal such as the Journal of Theoretical Biology or PNAS or Lancet or the American Economic Review or Psychological Science. The authors may well be completely sincere in their belief that they’ve demonstrated something important, but their data don’t really back up their claim.

    This already describes 99%+ of what gets published. The standard amounts to “there is a difference between some groups -> therefore my favorite explanation for this difference is correct”.

    The percentage of papers that actually work out the premise of their model and deduce some predictions to check is very small nowadays.

  9. Eliot Johnson says:

    One problem that hasn’t been sufficiently acknowledged is the shallow scholarship and confirmation bias inherent in tech hype echo chambers. Self-proclaimed futurists are among the most culpable in this regard.

    Productivity growth as a function of tech innovation has a long history in economics, dating back to the Cobb-Douglas function first posited in the late 1920s, as well as Robert Solow’s later work in the mid-50s.

    From that more sober stream of work consider Nobel Laureate William Nordhaus’ thoughtful takedown of ‘economic singularities’ driven by outsized claims of productivity growth due to tech innovations as suggested by Brynjolfsson, among others. Here’s the abstract to his 2015 paper, “Are We Approaching an Economic Singularity? Information Technology and the Future of Economic Growth”:

    What are the prospects for long-run economic growth? The present study looks at a recently launched hypothesis, which I label Singularity. The idea here is that rapid growth in computation and artificial intelligence will cross some boundary or Singularity after which economic growth will accelerate sharply as an ever-accelerating pace of improvements cascade through the economy. The paper develops a growth model that features Singularity and presents several tests of whether we are rapidly approaching Singularity. The key question for Singularity is the substitutability between information and conventional inputs. The tests suggest that the Singularity is not near.

    Link to NBER post:

  10. jim says:

    Wow. After reading Funk's piece I'm totally baffled.

    First, the fact that there are fewer drugs (or whatever) per hour of research time shouldn’t surprise anyone. Whenever a new field opens, I’d expect discoveries to accumulate slowly at first, accelerate, then decelerate.

    Second, for that reason it shouldn’t surprise anyone that “innovation” (assuming there is some accurate way to quantify such a thing) slows after a while.

    Third, while I agree that Uber is neither innovative nor valuable, there's no particular reason to expect a positive relationship between “innovation” and “productivity” (however accurately those are defined) and share prices or stock values, except over the longest of periods. Facebook is (or was) an innovation. But how does it improve productivity? Lots of people who are completely incompetent with technology own very expensive iPhones. I'm very competent with technology and I own a $50 phone from wal*mart. So the value of Apple's shares doesn't say anything about productivity, nor should it.

    Fourth, following what's above, that implies that what Funk calls a “rational economic analysis” is only rational by his standards. So while I agree that, eventually, companies that don't make money will see their share values plummet back to earth, no one knows just yet which companies will win the race to the most profitable technologies.

    Fifth: Am I missing something? AFAIK, the reason more money flows into software than into medical devices is that it takes a decade to bring a medical device to market while it takes barely a few years to bring new software to market. Then it takes another decade to scale up production of the medical device to sell it by the millions, but barely a few months to do the same with software. So software scales 100x faster and costs much much less to create. So it’s not hard to see why no one is investing in medical devices. Same for drugs, and same times ten for houses, cars, and heavy industry stuff like railroad and construction equipment.

    Sixth: there really *are* ample reasons why “technology” – meaning software – is getting a lot more investment than home builders, and there *are* good reasons for the prices of technology companies to be quite high. But OTOH that doesn't mean they will stay that way or that such high values are justified over the long term. For a few companies, they will be. For most, they won't. But again, no one knows which will win.
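jim's first point, that discoveries in a new field accumulate slowly, then accelerate, then decelerate, is the classic S-curve. A toy logistic model (hypothetical parameters, not calibrated to any real field) shows why a late-stage slowdown in discoveries per year is expected rather than alarming:

```python
# Toy S-curve: if cumulative discoveries in a field follow a logistic curve,
# the rate of new discoveries rises, peaks at the inflection point, then
# falls. Parameters below are made up for illustration.
import math

def cumulative_discoveries(t, K=1000.0, r=0.5, t_mid=20.0):
    """Logistic model: K = eventual total, r = growth rate, t_mid = inflection year."""
    return K / (1.0 + math.exp(-r * (t - t_mid)))

rates = []
for year in range(0, 41, 5):
    # discoveries made during this single year
    rate = cumulative_discoveries(year + 1) - cumulative_discoveries(year)
    rates.append(rate)
    print(f"year {year:2d}: ~{rate:5.1f} new discoveries")
```

The per-year rate peaks around the inflection year and then declines, so "fewer drugs per hour of research time" late in a field's life is exactly what the curve predicts.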

    • jim says:

      One other thing I don’t get:

      Why do people think – as Funk seems to think – that stocks have some intrinsic or “fair” value, some amount that they’re “really” worth? There is no such value for any stock. The value of a stock fluctuates with demand in the market. Today it’s worth more than what it sells for to the people who are buying it and less than what it sells for to the people who are selling it.

      • The price is what it is, of course, but there’s no question that the price of a thing can be “wrong” in the sense that buying and then selling shortly after (or selling and then buying back shortly after) can make you huge quantities of money if you happen to know when information will hit the market. That’s why we ban insider trading: insiders know when the price is wrong.

        • jim says:

          “insiders know when the price is wrong.”

          People think that but it’s almost never true.

          There are situations where they know a specific bit of information will move the price – an accounting fraud announcement or the surprise loss of a major contract for example. But those situations are rare. The impact for most info releases isn’t predictable.

          • Carlos Ungil says:

            Interesting reading on the subject:

            “Overall they made money about 77 percent of the time that they traded on stolen press releases, making $4.1 million of profits on their winners but also losing about half a million on their losers. Good, but not perfect; knowing a company’s earnings in advance does not, it turns out, guarantee that you can make money.”
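            Just to make the arithmetic in the quoted passage concrete (the dollar figures and win rate are the ones quoted above; the netting is simple subtraction, not additional data from the article):

```python
# Arithmetic on the figures quoted above: $4.1M of profits on winners,
# "about half a million" of losses on losers, and a 77% win rate.
# The net figure below is my own subtraction, not a number from the article.
gains_on_winners = 4_100_000   # dollars, as quoted
losses_on_losers = 500_000     # dollars, as quoted ("about half a million")
win_rate = 0.77                # fraction of trades that made money

net_profit = gains_on_winners - losses_on_losers
print(net_profit)  # 3600000
```

            So even trading on stolen press releases, nearly a quarter of the trades lost money: the informational edge was statistical, not a sure thing.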

            • None of this negates my original point, which is that there can be a clear difference between the market price today and the market price that would obtain if all information were made public today. It doesn’t even require that insider trades always pay off, just that the information moves the price substantially. Since that *does* happen, the concept is clear.

              What I’m pushing back against hardest is the idea that the market price is always, basically by definition, the “right price”. This can only come even CLOSE to being true in a situation with perfect, cost-free symmetry of information among all market participants and perfect liquidity with zero market frictions such as taxes and other transaction costs.

              • jim says:

                “What I’m pushing back against hardest is the idea that the market price is always basically by definition the “right price”. ”

                What *I’m* pushing back against is the idea that there is any such thing as a “right” price or a “fair” price or a “perfect” price. There isn’t.

                See, that’s the rub. You’re saying that I’m saying the market is perfect. I’m not saying that at all. I’m saying that the price on any given day is a very crude estimate of the company’s value. But, unfortunately, there isn’t anything better. It’s not perfect. Sometimes it’s not even good. But it’s the best available information.

              • > But it’s the best available information.

                I disagree. The fact that people who cheat can make insider trades and make money 77% of the time proves that the market price is *NOT* the best available information.

              • jim says:

                “The fact that people who cheat can make insider trades and 77% of the time they make money proves that the market price is *NOT* the best available information.”

                The fact that there is a temporary information discrepancy between a handful of insiders and the rest of the market is irrelevant. That discrepancy lasts, what, days or a week? Compared to an investment horizon of years or decades? Over the long term the price movement on a single earnings announcement is trivial, even if cheaters can score on it.

              • Curious says:


                Another effect of this temporary mispricing is the amount of wealth that is lost by those on the other side of the transaction, and how it is consolidated among an increasingly small number of people.

              • Incorrect assessment of the long-term value (months, a few years) of companies during the Dot Com boom had important real-world echoes that are still going on.

                Incorrect assessment of the mortgage securities market was one of those echoes, and it lasted several years before creating the worst financial crisis since the Great Depression.

                Incorrect assessments of the value of companies like Uber are still going on today (maybe popping somewhat due to COVID).

                As Curious mentions the concentration of wealth into the hands of those with the scarce but “real” information is a major problem for society at the moment.

                I would say the history of the financial industry since about 1997 is the history of creating false impressions in the public market in order to pump and dump bad assets over and over again.

              • jim says:

                “Another effect of this temporary mispricing is the amount of wealth that is lost by those on the other side of the transaction and how it’s consolidation among an increasingly smaller number of people.”

                You’re claiming that the difference in pricing caused by the quiet period before earnings is causing a massive transfer of wealth? That’s an absolutely preposterous claim!

                I’d love to see your data on this supposed wealth transfer.

              • jim says:

                Daniel, you’re conflating price variations resulting from varying perceptions about the future with price variations resulting from “hidden information”.

                I don’t know why, but there is a class of technically educated determinists who want the stock market to have some exact value at every second that reflects the unknown future, or at the very least to follow the much-loved random variation around the mean of that exact future.

                Most of the people who lose money in stock market busts are wealthy people. Just ask Krugman: he’ll tell you that a rising stock market doesn’t help little people. Why? Because they don’t invest. If that’s true, then how are they getting screwed by crashes?

                Here’s why some people are amassing huge wealth. It doesn’t have anything to do with false information in the stock market:

                Tech investors and founders are amassing huge wealth because tech companies are aggregating the earnings of what used to be hundreds and even thousands of businesses as they take over the entire world economy. And that’s happening as the world’s population has quadrupled in less than a century. On top of that, increasing efficiency means that larger companies are making more with less. Today a product loaded into a container in central China can arrive at the dollar store in Little Rock without being touched by a single human the entire trip. Now all those people who would have touched that freight a hundred years ago are out of jobs.

              • This is a very Bayesian-vs-frequentist sort of argument, so it’s interesting, but maybe a little off topic. In a Bayesian context the perception of the probabilities for the future rests entirely on what you condition on: p(Future | Knowledge). But if you accept into Knowledge “facts” that aren’t true, you get *misperception*.
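                A minimal sketch of that conditioning point, with entirely made-up numbers: updating on a fabricated “fact” as if it were honest produces a confident posterior that the truth doesn’t warrant.

```python
# Sketch of p(Future | Knowledge) with hypothetical numbers: how much a
# positive earnings report should move your belief that a company is "good"
# depends on whether the report is honest or fabricated.

def posterior_good(prior_good, p_report_if_good, p_report_if_bad):
    """Bayes update: P(good | positive report)."""
    num = p_report_if_good * prior_good
    den = num + p_report_if_bad * (1 - prior_good)
    return num / den

prior = 0.5  # hypothetical prior that the company is good

# An honest positive report is much likelier from a good company...
honest = posterior_good(prior, p_report_if_good=0.8, p_report_if_bad=0.2)

# ...but a fabricated report gets issued regardless of the company's health,
# so conditioning on it should move beliefs almost nowhere.
fabricated = posterior_good(prior, p_report_if_good=0.8, p_report_if_bad=0.8)

print(round(honest, 2))      # 0.8 -> the report is informative
print(round(fabricated, 2))  # 0.5 -> the report carries no information
```

                A trader who believes the fabricated report is honest applies the first update and lands at 0.8, when the truth warrants staying at 0.5. That gap between the two posteriors is the misperception.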

                I’ll just say that widespread misperception about the possibilities for the future can drive mispricing that is absolutely obvious to knowledgeable people. Not all “perceptions” are created equal.

                The perception that Bernie Madoff had would have been very very different from the perception his clients had. However, one of those perceptions was *right* and one of them was *wrong*.

                In my opinion, the history of the last 20 years of the stock market is the history of people intentionally trying to deceive others about the future, even when they themselves have enough relevant information to KNOW, with a relatively high degree of certainty, that their company will never really achieve the sort of future expectations everyone has been led to believe.

                It’s the same history as the hyped up science articles so often derided here.

                It’s also the history of first selling those mis-expectations direct to retail (Dot Com) then wholesale (to mortgage securitization programs), then B2B (ie. get your company up and running and then sell it off to a bigger company that acts basically as a “tech hedge fund”, companies like Google, FB, Microsoft, and Apple)

            • jim says:

              “Interesting reading on the subject:”

              Cool article. It would be interesting to know how often they declined to trade on the info they acquired; the article doesn’t say whether they always traded. They could have hacked many more releases but decided against trading on most of them, in which case the 77% hit rate applies only to the trades where they were confident the price would move.

Leave a Reply