The meta-hype algorithm

[cat picture]

Kevin Lewis pointed me to this article:

There are several methods for building hype. The wealth of currently available public relations techniques usually forces the promoter to judge, a priori, what will likely be the best method. Meta-hype is a methodology that facilitates this decision by combining all identified hype algorithms pertinent for a particular promotion problem. Meta-hype generates a final press release that is at least as good as any of the other models considered for hyping the claim. The overarching aim of this work is to introduce meta-hype to analysts and practitioners. This work compares the performance of journal publication, preprints, blogs, Twitter, TED talks, NPR, and meta-hype to predict successful promotion. A nationwide database including 89,013 articles, tweets, and news stories was used. All algorithms were evaluated using the total publicity value (TPV) in a test sample that was not included in the training sample used to fit the prediction models. TPV for the models ranged between 0.693 and 0.720. Meta-hype was superior to all but one of the algorithms compared. An explanation of meta-hype steps is provided. Meta-hype is the first step in targeted hype, an analytic framework that yields double hyped promotion with fewer assumptions than the usual publicity methods. Different aspects of meta-hype depending on the context, its function within the targeted promotion framework, and the benefits of this methodology in the addiction to extreme claims are discussed.

I can’t seem to find the link right now, but you get the idea.

P.S. The above is a parody of the abstract of a recent article on “super learning” by Acion et al. I did not include a link because the parody was not supposed to be a criticism of the content of the paper in any way; I just thought some of the wording in the abstract was kinda funny. Indeed, I thought I’d disguised the abstract enough that no one would make the connection but I guess Google is more powerful than I’d realized.

But this discussion by Erin in the comment thread revealed that some people were taking this post in a way I had not intended. So I added this comment.

tl;dr: I’m not criticizing the content of the Acion et al. paper in any way, and the above post was not intended to be seen as such a criticism.


  1. Pepe Silvia says:

    Would love to see the link.

  2. jrc says:

    Finally giving us something useful Andrew! If I got me some double hyped promotions, I’d have tenure in the bag! I mean, it has the word promotion in it already!

    You know what DOESN’T help my tenure case? Referees returning papers because I “appear to have blatantly p-hacked the results” or “wandered so far down the garden of forking paths that I don’t even know where the research started” or “there is no connection between the theory presented and the statistical analyses actually conducted”. I blame you for that. This, though – this is actually useful for a professional research career.

  3. yjr says:

    This is a joke, right?

  4. Corey says:

    So on the one hand, yes, extreme learning, super learners, and optimal data analysis — all names chosen to promote instead of describe or inform. Boo!

    On the other hand, is this actually a bad method or a bad application of it?

    • Andrew says:


      The method could be excellent; I have no idea. I find the hype distasteful but it could be a useful method, and, hey, in that case the hype could be a good thing in that it helps to inform people about this method.

  5. Padang Itik says:

    The tough thing about hype/marketing/advertising is that it’s an arms race: you don’t just have to do it, you have to do more of it than everyone else. So every time someone pushes an ad budget higher, everyone else has to follow suit, and the stakes keep rising. The externality (I’m not an economist but I’ll pretend like I understand that word anyway) is that the public is more and more awash in spammy ads which take more and more of our attention. The best solution would be some kind of disarmament: everyone agrees to advertise a lot less and make that aspect of competition less important than building better products (a healthy form of competition that is not zero-sum). The other solution (that I’m trying to pursue personally) is to remove yourself from ad platforms: don’t watch TV, use Facebook, etc.

    For scholarship, maybe there’s an analogous situation: hype is becoming part of the arms race of getting papers noticed, communicating breakthroughs, and (for the individual) getting tenure and other career perks. Wouldn’t it be great if we could have some kind of disarmament: maybe voluntarily agree on rules about how much a paper can be hyped or marketed, agree never to tweet about papers, only give TED talks on papers that are at least 20 years old and weren’t written by you, and so on. Or, (another thing I’m doing) read fewer new papers since so many of them are over-hyped junk.

  6. Erin says:

    I know some of the authors of this article and personally, I think it’s disrespectful to deride someone’s hard work and contributions to the field in this way. I understand that your post is meant to be a joke, but it’s possible to make jokes without taking a dig at something that these authors are (and should be) very proud of — a publication in PLoS One.

    In my opinion, the abstract itself is not hyped at all. If you have an issue with the term “Super Learner”, then you should address (or if you must — ridicule) that directly, rather than mocking practitioners for writing an abstract that simply mentions the name of the algorithm that they’re using. Your energy is misdirected and quite frankly, petty.

    The term “Super Learner” was coined in 2007 by Mark van der Laan from UC Berkeley (also my PhD advisor) — this article is not claiming to invent and hype a new machine learning method. Despite the implication, the Super Learner algorithm is not a product of the current Machine Learning / Big Data hype craze (that annoys all of us in this field equally). If you want real hype, search the latest deep learning papers on arXiv (e.g. “DeepNet”, “DeepMath”, “DeepGaze”). That aside, a machine learning algorithm that outperforms nearly all other algorithms… well, that requires a good name and so I think it’s worthy of the name Super Learner (personal opinion, of course).

    I enjoy your blog because you offer a measured, critical voice in this community on a variety of topics. I hope that you can hold your future posts to a higher standard than this one.


    • Andrew says:


      The point of the post was not to deride the work in that paper. What happened was that someone pointed me to it and I thought it was pretty funny, so I whipped off the parody. It was not meant in any way as a criticism of the content of the paper. I was actually at a conference not long ago where Mark van der Laan presented this work, and I thought it was interesting work. People get to title their own methods, and if Mark and his colleagues want to call it a super learner, that’s their call!

      The general topic of comparing prediction models is important, and I recognize that too. For example, my colleagues and I just finished a paper on using stacking to average Bayesian predictive distributions. We didn’t happen to call it Super Stacking or BayesStack or whatever, but we could’ve, I guess. And, if we had given our method (ok, not really a new method, more a tweak of an old point-estimation idea, adapted to combining posterior distributions) such a name, that would not make the method any worse.

      I think one could do a similar sort of parody of the abstract of just about any scientific paper. The point is to caricature some aspect of it. For example, I could take this paper of mine and focus not on hype but on b.s.:

      Decisions in what to publish are often justified, criticized, or avoided using concepts of jargon and buzzwords. We argue that the words “jargon” and “buzzwords” in publishing discourse are used in a mostly unhelpful way, and we propose to replace each of them with broader collections of b.s. to make up a collection of meaningless adjectives that we think will be helpful in taking an empty discussion of statistical foundations and getting it published in practice. The advantage of these reformulations is that the replacement terms bamboozle the reader . . .

      Anyway, believe it or not, I thought my revised version of the super learner abstract was disguised enough that no one would connect it with the original! I was not expecting anyone to catch the original source, and it was not my intention to criticize that paper.

      If I wanted to criticize that abstract, really the only thing I’d take issue with is the claim that their framework yields “inference with fewer assumptions than the usual parametric methods.” Such claims are tricky because assumptions can be measured in different ways. More accurate predictions in particular classes of examples, sure, I can buy that, but “fewer assumptions” . . . that I’m not so sure about.

      This comes up in Bayesian analysis too. For example, compare maximum likelihood to a Bayesian inference with informative prior. Does the Bayesian inference make more assumptions (because that prior is an assumption), or does the maximum likelihood estimate make more assumptions (because it is implicitly assuming the uniform prior from -infinity to infinity, which is a very strong prior assumption that the effect size is huge)? Does linear regression make the strong assumption of linearity and additivity, or is it just “regularized least squares,” a set of data operations making no assumptions at all? Etc.

      I’ve wrestled with the double-robustness issue a long time—it came up in this paper from 1990; see top of page 1153 of that paper. We did not attempt to lay out the concept in general and I claim no priority here; I think it was well known that a regression analysis for causal inference is doubly robust in the sense that the estimates are valid if the regression model is correct or if the treatment was assigned at random. The later work on this topic by Mark and others developed this idea in much greater generality.

      That was all a digression. Again, possible hype aside, the original paper by Mark and others stands on its own, and if I thought some of the language in the abstract was funny, that’s just my take on the presentation, not at all intended as a diss on the content. I’ll add a P.S. to the post to clarify.

      • Corey says:

        For the record, I found it by doing a Google search for the quoted phrase “methodology that facilitates this decision by combining all identified”, which pulls it up as the first hit. My Google-fu was strong that day.

      • Erin says:

        Thanks for your reply. I understand that you didn’t mean for the source of the abstract to be revealed, but it was (it wasn’t too hard to google a few of the words, as Corey mentioned). I have empathy for the authors who instead of enjoying their achievement of publishing in PLoS One, are now thinking about how they are the subject of mockery by a prominent member of the community. That’s why I responded here, since it may not have been obvious to you that your blog post created bad feelings for the researchers involved. Making a joke of people’s research is best kept to private email with your colleagues.

        • Andrew says:


          I think it’s ok to joke about published papers but I regret not making it more clear originally that I was just commenting on the wording, not on the content. I naively thought the original paper would be unfindable, and had I thought it through, I would’ve added a P.S. right from the start to clarify that I had no problems with the paper’s content.

          Thanks for commenting here. The comments are a key part of the blog, and I much prefer if people tell me right here when I mess up!

        • jrc says:

          “Making a joke of people’s research is best kept to private email with your colleagues.”

          No – that is how Science degenerates. Science works best as open discourse. Satire, and even mockery, are an important part of open discourse.

          Open mockery of individuals (as opposed to their work or writing) is unnecessary, counter-productive, and not part of open scientific discourse (and if it must be done, I agree it should be kept to private exchanges). But the potential for hurt feelings should not determine whether or not someone publicly criticizes someone else’s work (which didn’t even happen here – he publicly mocked their writing, not even their work, and only as part of a broader critique of the scientific-industrial-hype-complex). If that means some authors are “enjoying their achievement of publishing” less than they would be without criticism showing up on the internet…well, that’s the cost of doing science and publicizing your work. As academics, we spend and get paid huge amounts of mostly taxpayer money to do something valuable to society (generate knowledge) – our personal feelings of pride should come from contributing as best we can to that process, not from the criticism-free adulation of our colleagues. (Note: I do not attribute any hurt feelings to the authors of the work that was the object of Andrew’s satire, just to Erin’s interpretation of why she felt the need to comment here).

          Not only does it impede scientific discussion and progress to limit critical comments that might hurt someone’s feelings, it also reifies the notion that criticism of work, or the state of scientific research, or some specific statistical analyses is actually a criticism of the Author as a person. And that is not the case at all – we criticize work in order to push the field forward, not because we are fighting some personal battle with the authors or are interested in morally judging their character.

          • Andrew says:


            +1 to your comment.

            And -1 to me on my original post: by not making it clear that I was mocking style rather than content, I muddied my own message. So +1 to Erin for giving me the chance to clarify.

    • Corey says:

      > a machine learning algorithm that outperforms nearly all other algorithms… well, that requires a good name and so I think it’s worthy of the name Super Learner

      The end effect of everyone acting this way is a proliferation of hype-sounding jargon that does little to actually inform people of what they’re looking at. The very-similar-in-spirit weighted majority algorithm has a name that at least gives you a clue as to what you’ll find under the hood.

      • Erin says:

        I don’t like hype-sounding names either. I felt that the name was a bit unusual when I first heard it, but I’ve been using the name for 10 years so I am over it at this point. I suppose it could have been called something like: Generalized Loss Based Method for Ensemble learning via metalearning and Cross-Validation with Oracle Results. But that’s a mouthful. I don’t speak for Mark, but I don’t think he has much concern for people criticizing the name he came up with. If someone produces a new method that shows better results than Super Learner… well, that might get his attention.
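        [Editorial aside: that mouthful of a name does list the actual recipe: fit several base learners, collect their out-of-fold (cross-validated) predictions, then fit a metalearner on those predictions. A minimal toy sketch of that recipe in Python — hypothetical illustration only, not the SuperLearner software; all function names and the two base learners here are invented:]

        ```python
        # Toy super-learner-style ensemble: out-of-fold predictions from base
        # learners are combined by a metalearner (here, a crude grid search
        # over convex weights minimizing squared-error loss).
        import numpy as np

        def kfold_indices(n, k, seed=0):
            """Shuffle 0..n-1 and split into k folds."""
            rng = np.random.default_rng(seed)
            return np.array_split(rng.permutation(n), k)

        def cv_predictions(base_learners, X, y, k=5):
            """One column of out-of-fold predictions per base learner."""
            n = len(y)
            Z = np.zeros((n, len(base_learners)))
            for fold in kfold_indices(n, k):
                mask = np.ones(n, dtype=bool)
                mask[fold] = False
                for j, learner in enumerate(base_learners):
                    predict = learner(X[mask], y[mask])  # train on other folds
                    Z[fold, j] = predict(X[fold])        # predict held-out fold
            return Z

        def meta_weights(Z, y, grid=51):
            """Best convex combination of two learners under squared error."""
            best_w, best_loss = 0.0, np.inf
            for w in np.linspace(0, 1, grid):
                loss = np.mean((y - (w * Z[:, 0] + (1 - w) * Z[:, 1])) ** 2)
                if loss < best_loss:
                    best_w, best_loss = w, loss
            return np.array([best_w, 1 - best_w])

        # Two toy base learners: the training mean, and ordinary least squares.
        mean_learner = lambda X, y: (lambda Xn: np.full(len(Xn), y.mean()))
        def ols_learner(X, y):
            beta = np.linalg.lstsq(np.c_[np.ones(len(X)), X], y, rcond=None)[0]
            return lambda Xn: np.c_[np.ones(len(Xn)), Xn] @ beta

        rng = np.random.default_rng(1)
        X = rng.normal(size=(200, 1))
        y = 2 * X[:, 0] + rng.normal(size=200)  # linear signal: OLS should win
        Z = cv_predictions([mean_learner, ols_learner], X, y)
        w = meta_weights(Z, y)  # nearly all weight lands on the OLS learner
        ```

        [Real implementations minimize a chosen loss over many base learners, typically with non-negative least squares or a similar constrained regression rather than this two-learner grid search.]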

  7. Erin says:


    > > “Making a joke of people’s research is best kept to private email with your colleagues.”
    > No – that is how Science degenerates. Science works best as open discourse. Satire, and even mockery, are an important part of open discourse.

    Regarding my comment — of course I don’t think it’s a good idea to make jokes of people’s research privately. The intent of my comment is to say that if one feels the desire or need to make a joke at the expense of your peers, then please do it privately. I am by no means promoting private mockery of people’s research (as should be quite clear by the context of my other comments).

    As a scientist myself, I understand and value open discourse and critique of people’s work. There is a difference between thoughtless mockery and valuable scientific or cultural criticisms. If you or Andrew have a real concern about our field being hijacked by the “scientific-industrial-hype-complex”, then let’s have a discussion about it. Your view that mockery is an important part of open discourse in science is just one person’s opinion (yours), and I disagree with this viewpoint. There is always a way to have a critical conversation without mockery… it just requires respect from both parties involved.

    • Andrew says:


      You write, “If you or Andrew have a real concern about our field being hijacked by the ‘scientific-industrial-hype-complex’, then let’s have a discussion about it.” I do have such concerns, and I’ve written about them a lot! For example, here and here and here, to give three different examples. Again, I regret that in my post above I was not at first clear that I was not disparaging the content of the Acion et al. article.

    • jrc says:

      I didn’t take it to mean you thought researchers should joke about colleagues (in private or anywhere); I took it to mean you considered joking about colleagues to be unprofessional and unscientific. In the case of attacks on researchers as people, I agree with you. In the case of attacks on research that happens to have been written by someone (so all research), I disagree with you.

      And of course, I agree with you that my opinion on the bounds of acceptable criticism in scientific discourse is just my opinion – I don’t think you have to agree with me, I was just making an argument. That argument distinguishes between satire of scientific practice (potentially as embodied in a particular work) and mockery of individual scientists: I think the former can be productive, and the latter should not be part of the scientific discussion.

      As for having a discussion about the “scientific-industrial-hype-complex” – I’d say that a non-negligible percentage of the discussion on this blog is precisely about that. Some of that proceeds with discussions of the potential negative effects on the discipline of over-hyping results; some of it pertains to how researchers can balance the need to promote their work with the requirements of suitable scientific humility and the embrace of variation; and some of it involves satire and mockery – but it is almost always satire and mockery of a piece of work, not a person. You don’t have to like that opinion or like how we discourse on the discipline, but I think there is a big difference between mocking a paper and mocking a person. And I think many of us try very hard to stick to the work and avoid attacking the person.
