
FREQUENTLY ASKED QUESTIONS

General

ERROR is a comprehensive program to systematically detect, report, and prevent errors in scientific publications, modelled after bug bounty programs in the technology industry. For a one-pager on the project’s goals, see here.

• We expect that every piece of work contains errors, almost without exception. The presence of errors is not itself fatal to an article’s scientific contribution, but it is important to acknowledge such errors and correct the scientific record.

• We recognise that there is great heterogeneity between articles and fields, and therefore employ a very broad definition of an error, sometimes used in law, as an act that through ignorance, deficiency, or accident departs from or fails to achieve what should be done.

• We also recognise that some elements of an article may not have represented an error at the time of writing, but may now be considered erroneous due to changing best practices. For example, using a Cross-Lagged Panel Model is not itself an error, but making specific claims based on a CLPM might be an error. The goal of ERROR is not to debate whether such a choice was appropriate at the time of writing, nor is it to criticise authors for their choices based on information that was not available to them at the time. Rather, the goal is to maximally inform current and future readers of these influential articles as to the informativeness of the article’s claims, and to provide a forum for error reviewers and authors to discuss what readers should now believe on the basis of the work described in the article.

• The threshold of what constitutes an error is necessarily heterogeneous and qualitative. Sometimes errors involve the relationships between elements of an article rather than any individual part.

• Some things may be conditional errors based on perspective. This project does not seek to establish a consensus on such issues, but cases can still be defined conditionally. For example, “Not everyone believes that X is appropriate. For those that do not, Y conclusion would be considered an error. However, for those that believe X is appropriate, Y is a valid conclusion.”

Each of the following may represent a type of error, but it is difficult to say so categorically, and impossible to evaluate severity absent context:

  • Errors in code for data processing or analyses.
  • Mismatches between details reported in the manuscript vs. those implemented in the method or code.
  • Mismatches between statistical analyses and conclusions.
  • Citation accuracy could represent an error if the citation is central to the article’s argument (Cobb et al. 2023).
  • Misapplication or misinterpretation of statistical analyses.
  • Issues of measurement.
  • Issues related to the Four Validities (internal, external, construct, and statistical).

Pre-publication peer review and error checking are both forms of peer review, but they differ in scope and purpose. Pre-publication peer review necessarily makes decisions about originality and importance, whereas error checking does not. Articles selected for error checking have already had their importance established, both through the publication process and the community’s citation of the article over time. Error checking is generally a more technical process, which explicitly asks reviewers to inspect the authors’ research materials, such as their data and code. Error checking is also more audit-like, in that the report is explicit about which elements of an article have been checked.

  1. The original author(s) are contacted by a recommender and invited to agree to have their article checked for errors. The “Invited Papers” counter on the website is incremented.
  2. If they agree, the process continues and the recommender finds a reviewer with appropriate expertise.
  3. The reviewer writes a report. The reviewer retains ownership of this report and agrees to licence it under a CC-By 4.0 licence and to allow it to be hosted on error.reviews.
  4. The recommender reads the report and, if necessary, liaises with the reviewer about any changes before the report is sent to the authors. To communicate findings to the scientific community promptly, the final report is published immediately on error.reviews.
  5. The authors are invited to submit a response. This is also subject to changes by the recommender and a CC-By 4.0 licence and agreement to allow it to be hosted on error.reviews.
  6. Based on the reviewer’s report (as a whole) and the author’s response (as a whole, if they choose to submit one and do so in good time), the recommender writes their own short summary report, which makes (1) a qualitative judgement about the presence of errors in the original article and (2) a recommendation of an appropriate action to the authors, if applicable. See the “How are errors quantified” heading. This report is also subject to a CC-By 4.0 licence and will be hosted on the error.reviews website, along with the reviewer’s report and author’s response.
  7. Should the reviewer and/or authors wish to continue discussing issues raised during this process beyond the report and reply, they are encouraged to do so on PubPeer or other forums that foster useful scientific debate. In order not to entangle either party in lengthy back-and-forths, ERROR will only be involved in the reviewer’s report and the author’s reply.

The recommender’s role is broadly analogous to that of an editor during peer-review. They play a role in deciding which articles will be selected for error detection; solicit reviewers; can act as a mediator between the authors and the reviewer (e.g., in determining what constitutes a reasonable request); ensure that both parties engage in professional discourse; review both the reviewer’s report and the authors’ response; write the final summary report; and issue the final error categorisation and recommended action.

• Checks for individual (types of) errors are performed by the reviewer. The recommender uses the reviewer’s report (as a whole) and the author’s response (as a whole) to make (1) a qualitative judgement about the presence of errors in the original article and (2) a recommendation of an appropriate action to the authors, if applicable.

• Analogous to an acceptance decision during peer review, where editors ultimately make categorical decisions such as “accept for publication,” “major/minor revisions” or “reject,” the recommender can make a final decision using the following categories. Each decision is linked to the type of appropriate action that the recommender will recommend that the authors take. The recommender’s decision and recommendation will be published on the website alongside the reviewer’s report and the response by the author (if applicable).

  • No errors: No additional action beyond publication of the error report and reply.
  • Minor errors: No additional action beyond publication of the error report and reply, plus a recommendation that the authors appropriately recognise these errors in future discussions of the article. Note that we expect most published research to fall into this category.
  • Indeterminable: No determination could be made regarding the presence or absence of important potential errors. Recommendation that the authors appropriately recognise this lack of verifiability in future discussions of the article. Note that this is considered a less desirable outcome than “Minor errors” because in that case the errors are verifiably minor in nature.
  • Moderate errors: Correction notice (minor). Note that numerous ‘minor’ errors might constitute a moderate error in aggregate.
  • Major errors: Correction notice (major); may warrant an expression of concern.
  • Severe errors: Retraction.

• In order to foster a culture of error checking and error acceptance, ERROR can also issue commendations, for example to authors whose article was found to contain no errors, or to authors who acted upon the recommender’s recommendations in a timely manner. The decision to publish a correction notice or to retract a paper in light of demonstrated errors in that work is not an easy one, but it demonstrates an admirable commitment to research integrity and the pursuit of truth (e.g., the praise for Julia Strand’s efforts to correct an error in her own work; see also Julia Rohrer’s broader Loss of Confidence Project).

• Legitimate scientific scrutiny and critique are at the core of this project. While the reviewer and the authors may assume nominally adversarial roles during the error checking process, with regard to who is scrutinising versus defending the article’s methods and claims, the process is ultimately a collaborative one in the service of improving our collective scientific knowledge and fostering a culture of error checking and error acceptance. As in all other contexts, all parties should conduct themselves professionally (e.g., avoid unfair or personal criticism or attacks). This also extends to requiring participants to engage meaningfully in the error checking process (e.g., respond to reasonable requests). The recommender will work with both the reviewer and the author to ensure that their respective reports and responses meet these standards, and acceptance of both the report and the response by the recommender is conditional on these professional standards being met.

We will almost certainly make errors too. Journal editors and peer reviewers make errors, and so too will recommenders and error reviewers. If you check some aspect of this process and suspect you have found an error, please contact us and we will do our best to rectify it. Part of this project is to normalise error checking and error acceptance, and such a culture must be inherently reflexive. It’s turtles all the way down.