Update on IEEE’s refusal to issue corrections

This is Jessica. Below is an update from Steve Haroz on his previously shared attempt to get a correction to an IEEE published paper.

A week ago, I wrote about IEEE’s refusal to issue corrections for errors we made in our paper, “Skipping the Replication Crisis in Visualization: Threats to Study Validity and How to Address Them”.

Well, someone from IEEE contacted me, apologized, and allowed us to add a note to the beginning of the original paper and to IEEE Xplore to describe the errors. We sent it to them on Dec 17, and it should eventually appear here (not sure when, especially given the proximity to the holidays).

While I thank IEEE for resolving this issue in our paper, I still have some concerns:

1) Many modern publishers issue corrections by updating the article and explaining the change in the corrigendum notice (example). However, IEEE is using a somewhat simplistic approach of adding a cover sheet to the paper but not allowing us to update the actual content. So readers will have to keep the cover sheet in mind while reading, and they will probably have to jump back and forth to integrate that information correctly. The correctness issue was fixed, but a readability issue was introduced.

Therefore, I still encourage those who wish to read or cite the paper to use the updated OSF version, not the IEEE version. The OSF viewer has all past versions at the bottom, and my comments that point out the changes are viewable interactively.

2) I am worried that we are getting special treatment due to the attention from the previous post (thank you to everyone who shared and retweeted!). Whether you can make paper corrections should depend on the merit of the corrections, not the popularity of the blog where you complain about it. The person from IEEE mentioned that “IEEE is in the midst of re-examining its policy regarding corrections to conference publications”. But I don’t know if that will result in more than a superficial change in policy.

3) IEEE’s current policies still state that a conference paper needs “substantial additional technical material” (IEEE PSPB 8.1.7 F(2)) to be submittable to a journal. But that’s true of any new submission that builds off of previous work. If the authors believe that the conference paper is sufficiently complete, then the work is stuck in a format with a limited and possibly inconsistently applied corrections policy. This issue comes down to a basic unresolved question: should conference papers count as part of the peer-reviewed scholarly record? If conference papers are to be credible, then they need to have the same clear process for making corrections that journals do. However, if conference papers are considered the temporary state of in-progress work that typically cannot be corrected, then (1) conference papers, like preprints, should not count as prior publications for journal submissions, and (2) we should be very cautious about citing conference papers, which may contain known but uncorrectable errors. 

How about you?

I’m worried about the likely special treatment our paper has gotten and the lack of clear policies on corrections. If you try to make a necessary correction to your IEEE conference paper, please let me know how it goes.

Regarding Steve’s question under #3, “should conference papers count as part of the peer-reviewed scholarly record?” — our reliance on them in computer science is admittedly unusual. As a grad student, I was told we prioritize conferences because tech changes faster than in many other fields, so we can’t bother with the long review cycles that, for instance, economists put up with. It may take a while to phase them out, but my sense is that an increasing number of computer scientists are over this line of thinking. E.g., just today Moshe Vardi published a CACM article with some reasons to drop conferences.


  1. Dale Lehman says:

    How quaint – should conference papers count as peer reviewed? How about only counting papers that are peer reviewed by competent reviewers? Or, how about not “counting” at all? Isn’t it time that we attempt to gauge the quality of the work and not the number of papers, citations, impact scores, etc.?

    • elin says:

      In CS that is what is used (having served on a campus tenure and promotion committee for a number of years). It’s just a very different culture, just like university books being most valued in history. They essentially say that they can’t have the time lag. Jessica’s update is interesting though.

  2. Andrew says:

    Conferences are great, but the whole reviewed-conference-paper thing . . . that’s a lot of work! A bunch of years ago I spoke at Nips and they sent me a bunch of conference papers to review, I gave quick reviews and then they got back to me and wanted more. Among other things, they wanted me to check the correctness of a proof. I was like, what? Are you kidding?

    But I do sometimes cite conference papers, because sometimes that’s the only form of a paper I’d like to cite. Then again, I’ll also cite Arxiv papers, and they’re not reviewed at all. So maybe I think we just have to back away from the idea of “the peer reviewed scholarly record.” But I guess people need peer-reviewed papers to get jobs. It’s a bit of an arms race.

    Regarding “tech changes faster than many other fields”: medical journals review papers within a couple of weeks, right?

    • On backing away from the peer reviewed scholarly record — that has always made intuitive sense to me. But then I immediately wonder how we can distinguish it from essentially pushing a reset button (for instance, it sounds from the Vardi article like computer science conferences at least started without peer review as we know it now, but evolved to that). Also, how do we make sure that people don’t rely on superficial heuristics even more in that world to fill the gaps, e.g., the most famous labs just absorb more of the attention and the less famous labs have even less of a chance than they do now? It’s why speculating about the future of science is hard, I guess.

      On the “tech changes faster than many other fields” point — I was just repeating what I was told! I’m not saying that’s true, and like I said in my comment on what Steve wrote, I think the inefficiency of conferences is something more and more people are acknowledging.

      • Steve Haroz says:

        I’ve heard the “tech changes faster than many other fields” argument frequently too. What’s silly about it is that especially in visualization and HCI, most (all?) of the research could have been done 10 years ago. We’re the bottleneck, not the tech.

    • Adede says:

      I hope somebody is checking the correctness of proofs.

    • Steve Haroz says:

      Andrew, conference reviewing is problematic for other reasons too. The tight schedule is used as an excuse to prohibit reviewers from requesting data or analysis code.

    • Clyde Schechter says:

      “…medical journals review papers within a couple weeks,…”

      Not really. There are a few journals with large audiences that will fast-track a paper that is deemed to be of unusual urgency and importance, such as breaking key developments in the Covid-19 epidemic. But under normal circumstances review times are on the order of a few or several months. Not as bad as economics, but typically not just a few weeks.

  3. Renzo Alves says:

    I suspect editors are sending a message.
