
Get your research project reviewed by The Red Team: this seems like a good idea!

Ruben Arslan writes:

A colleague recently asked me to be a neutral arbiter on his Red Team challenge. He picked me because I was skeptical of his research plans at a conference and because I recently put out a bug bounty program for my blog, preprints, and publications (where people get paid if they find programming errors in my scientific code).

I’m writing to you of course because I’m hoping you’ll find the challenge interesting enough to share with your readers, so that we can recruit some of the critical voices from your commentariat. Unfortunately, it’s time-sensitive (they are recruiting until May 14th) and I know you have a long backlog on the blog.

OK, OK, I’ll post it now . . .

Arslan continues:

The Red Team approach is a bit different from my bounty program. Their challenge recruits five people who are each given a $200 stipend to examine the data, code, and manuscript. Each critical error they find yields a donation to charity, but the investigation is restricted to about a month. I have to arbitrate what is and isn't critical (we set out some guidelines beforehand).

I [Arslan] am very curious to see how this goes. I have had only small submissions to my bug bounty program, but I have not put out many highly visible publications since starting the program, and I don't pay a stipend for people to take a look. Maybe the Red Team approach yields a more focused effort. In addition, he will know how many people have actually looked, whereas I probably only hear from people who find errors.

My own interest in this comes from my work as a reviewer and supervisor, where I often find errors, especially when people share their data cleaning scripts and not just their modelling scripts, but also from my own work. When I write software, I have some best practices to rely on and still make tons of mistakes. I'm trying to import these best practices into my scientific code. I've especially tried to come up with ways to improve after recently correcting a published paper twice because someone found coding errors during a reanalysis. (I might send you that debate too, since you blogged the paper; it was about menstrual cycles and is part of the aftermath of dealing with the problems you have written about so often.)

Here’s some text from the blog post introducing the challenge:

We are looking for five individuals to join “The Red Team”. Unlike traditional peer review, this Red Team will receive financial incentives to identify problems. Each Red Team member will receive a $200 stipend to find problems, including (but not limited to) errors in the experimental design, materials, code, analyses, logic, and writing. In addition to these stipends, we will donate $100 to a GiveWell top-ranked charity (maximum total donations: $2,000) for every new “critical problem” detected by a Red Team member. Defining a “critical problem” is subjective, but a neutral arbiter—Ruben Arslan—will make these decisions transparently. At the end of the challenge, we will release: (1) the names of the Red Team members (if they wish to be identified), (2) a summary of the Red Team’s feedback, (3) how much each Red Team member raised for charity, and (4) the authors’ responses to the Red Team’s feedback.

Daniël has also written a commentary about the importance of recruiting good critics, especially now for fast-track pandemic research (although I still think Anne Scheel’s blog post on our 100% CI blog made the point even clearer).

OK, go for it! Seems a lot better than traditional peer review, the incentives are better aligned, etc. Too bad Perspectives on Psychological Science didn’t decide to do this when they were spreading lies about people.

This “red team” thing could be the wave of the future. For one thing, it seems scalable. Here are some potential objections, along with refutations to these objections:

– You need to find five people who will review your paper—but for most topics that are interesting enough to publish on in the first place, you should be able to find five such people. If not, your project must be pretty damn narrow.

– You need to find up to $3000 to pay your red team members and cover the possible charitable donations. $3000 is a lot, and not everyone has $3000. But I think the approach would also work with smaller payments. Also, journal refereeing isn’t free! Three referee reports, plus the time of an editor and an associate editor . . . put it all together, and the equivalent cost could be well over $1000. For projects that are grant funded, the red team budget could be incorporated into the funding plan. And for unfunded projects, you could find people like Alexey Guzey or Ulrich Schimmack who might “red team” your paper for free—if you’re lucky!


  1. Ben says:

    This does seem like a good idea.

    1. The charity angle is weird. If this is grant money, maybe it already was charity money lol.

    2. I’m undecided between a reward payout vs. just paying someone a fixed fee. $3000 -> 15 hrs @ $200/hr. That is a *lot* of reviewing from a highly qualified person. If the point is to get more eyes to take quick looks, then $200 each for 15 people @ $100/hr gets two hours apiece.

    3. “$3000 is a lot, not everyone has $3000” — when I was a graduate student (a whole year ago), travel/conference money flowed like water. Travel and hotels aren’t free, and ostensibly conferences fill similar purposes to this. I’m pretty sure there are a lot of groups that could afford this.

    4. Don’t journals charge money? This seems like it’s just letting people who write papers pick their reviewers. I do like the idea of skipping the journal (just put it on Arxiv and append the reviews at the end), but it doesn’t seem like this fixes any of the problems with peer review.

    5. Treating reviews like a black box seems strange. Surely the author has some idea where the problems are?

    • Andrew says:


      Sure, this Red Team system is not perfect. But the part that I like is the alignment of roles and incentives. Red Team members have the role of finding problems, and the authors want problems to be found. Also, I like the idea of people being paid.

      • jim says:

        “I like the idea of people being paid.”

        Me too! But it might be useful to find a way to score the people who get paid so authors could select the most conscientious reviewers.

        Possible approach: first categorize review issues along a range from suggestions to identified errors, and by the types of suggestions and errors. Reviewers could tally up the number in each category; authors could do the same tally and create an agreement comparison. The idea being that the authors, being interested in uncovering errors, could request the reviewers who have provided the most useful reviews in the past. Perhaps there could even be a market for reviews, such that authors are willing to pay the best reviewers more.
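        jim's tally-and-agreement idea could be sketched roughly like this (the categories, function names, and scoring rule are hypothetical illustrations, not anything from the post):

```python
from collections import Counter

# Hypothetical categories, ordered roughly from suggestion to identified error
CATEGORIES = ["style suggestion", "minor error", "critical error"]

def tally(labels):
    """Count how many flagged issues fall in each category."""
    return Counter(labels)

def agreement(reviewer_labels, author_labels):
    """Fraction of issues the reviewer and author put in the same category.

    Assumes both lists describe the same issues, in the same order.
    """
    if not reviewer_labels:
        return 0.0
    same = sum(r == a for r, a in zip(reviewer_labels, author_labels))
    return same / len(reviewer_labels)

# Example: a reviewer flagged four issues; the author re-classified each one.
reviewer = ["critical error", "minor error", "style suggestion", "critical error"]
author   = ["critical error", "style suggestion", "style suggestion", "critical error"]

print(tally(reviewer))               # per-category counts for this reviewer
print(agreement(reviewer, author))   # 0.75: they agreed on 3 of 4 issues
```

        A reviewer's track record could then be summarized by their error counts weighted by author agreement, giving authors something concrete to compare when choosing reviewers.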

  2. BenC says:

    Interesting idea and could be really useful, but I am concerned about the idea of paying when combined with the low rate. Once you bring money into the equation, people start making comparisons to what they could earn from other activities. I am guessing that for many academics we would want on such a Red Team, the total $200 stipend compares rather unfavorably to their hourly consulting rate (talent like Cass Sunstein does not come cheap!).

    If this is the case, people who might have been inclined to join such a Red Team in the absence of a financial incentive (e.g., because they see it as part of their usual job or service, like refereeing papers) might now, given this comparatively low financial incentive, decline to join because they are evaluating it relative to consulting and the like.

    As an aside, does anybody know the etymology of the term Red Team? Is it a US military holdover from the Cold War?

  3. Matt Skaggs says:

    Earnest academicians should avoid the use of the term “red team.” A red team is inherently a false flag operation.

    The process begins when your research group objectively gains an understanding of a vexing problem, but the understanding conflicts with what the folks in power were hoping for.

    The folks in power need some rationale for ignoring the research finding that they did not like. But the technical expertise resides in the research group, so what to do?

    So they go out and find a few loyal company men, often already retired, who together can amass more technical authority than the research group. This “red team” (should be gray team in my experience) then reviews the work of the technical team and, surprise, finds it wanting. Management at this point has no choice but to go with the findings of the group with more authority: the red team.

    Envision the current administration announcing that what Birx and Fauci are saying is controversial, and that the American public would benefit from a broader perspective. They then bring in a group of loyal followers with some expertise in epidemiology – could be quite fringy if there are enough of them – and those “experts” produce a consensus that conflicts with Birx and Fauci. That would be a classic red team operation.

    I would imagine that academicians would not want to taint an honest effort to find mistakes by associating it with this sort of thing.
