Cluster 3: Statistical Methods

Common Statistical Errors That Lead to Journal Rejection

Statistical errors are the single most common technical cause of rejection in peer review across clinical and quantitative research disciplines. Here are the errors that appear most frequently, and how to fix them.

9 min read · MeritPeer Editorial Team

In a systematic analysis of rejection reasons across 500 manuscripts, MeritPeer's QA team found that 67% of manuscripts rejected for technical reasons contained at least one critical statistical error. The good news: every single error category is entirely preventable with appropriate pre-submission review.

Incorrect Statistical Test Selection

The most fundamental statistical error is using the wrong test for your study design and data type. Common examples include applying a parametric test (t-test, ANOVA) to non-normally distributed data; using an independent-samples test when the samples are paired; applying a chi-square test when expected cell counts fall below 5; and using Pearson correlation when the relationship is non-linear. Each of these generates invalid results that any statistician reviewer will immediately identify. The fix: state your test selection rationale explicitly in the methods, and justify it against your data distribution.
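As a rough sketch of this workflow, the snippet below (illustrative only, with simulated data) checks each group for normality with a Shapiro-Wilk test and then picks Welch's t-test or its non-parametric alternative, the Mann-Whitney U test, accordingly. The data, sample sizes, and 0.05 threshold are assumptions for demonstration, not a recommendation of a universal rule.

```python
# Illustrative sketch: choosing a parametric vs. non-parametric test
# based on a normality check. Data here are simulated and skewed.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
group_a = rng.exponential(scale=2.0, size=40)  # right-skewed, non-normal
group_b = rng.exponential(scale=2.5, size=40)

# Shapiro-Wilk: null hypothesis is that the sample is normally distributed
normal_a = stats.shapiro(group_a).pvalue > 0.05
normal_b = stats.shapiro(group_b).pvalue > 0.05

if normal_a and normal_b:
    # Parametric: Welch's t-test (does not assume equal variances)
    stat, p = stats.ttest_ind(group_a, group_b, equal_var=False)
    test_used = "Welch's t-test"
else:
    # Non-parametric alternative for two independent groups
    stat, p = stats.mannwhitneyu(group_a, group_b)
    test_used = "Mann-Whitney U"

print(f"{test_used}: p = {p:.4f}")
```

In a manuscript, the methods section would then report which branch was taken and why, rather than leaving the reviewer to infer it.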

Missing or Incorrect Effect Sizes

p-values alone tell you whether an effect is statistically significant — they tell you nothing about how large or meaningful that effect is. Modern journal guidelines (Nature, NEJM, Lancet, and most Elsevier journals) require effect sizes (Cohen's d, OR, RR, eta-squared, etc.) alongside p-values. Manuscripts missing effect sizes are routinely sent back for major revision. Include effect sizes and their confidence intervals for every primary and secondary outcome.
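To make this concrete, here is a minimal sketch of computing Cohen's d with an approximate 95% confidence interval for two independent samples. The data are simulated, and the standard-error formula is the common large-sample approximation; real analyses should use a vetted package or the formula mandated by the target journal.

```python
# Illustrative sketch: Cohen's d with an approximate 95% CI.
# Simulated data; the SE formula is a standard large-sample approximation.
import numpy as np


def cohens_d(x, y):
    """Standardized mean difference using the pooled standard deviation."""
    nx, ny = len(x), len(y)
    pooled_sd = np.sqrt(((nx - 1) * np.var(x, ddof=1) +
                         (ny - 1) * np.var(y, ddof=1)) / (nx + ny - 2))
    return (np.mean(x) - np.mean(y)) / pooled_sd


rng = np.random.default_rng(0)
treatment = rng.normal(10.5, 2.0, 60)
control = rng.normal(9.8, 2.0, 60)

d = cohens_d(treatment, control)
n1, n2 = len(treatment), len(control)
# Approximate standard error of d for independent groups
se = np.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
ci_low, ci_high = d - 1.96 * se, d + 1.96 * se
print(f"d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```

Reporting the interval alongside the point estimate lets readers judge both the magnitude of the effect and its precision, which is exactly what a p-value cannot convey.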

CONSORT, STROBE, and PRISMA Non-Compliance

Randomised controlled trials must follow CONSORT reporting guidelines. Observational studies must follow STROBE. Systematic reviews and meta-analyses must follow PRISMA. Non-compliance with these international reporting standards is an automatic major revision or rejection trigger at virtually every clinical journal. MeritPeer's Statistical Review service includes compliance checking against all relevant reporting guidelines as a core component of every review.

Underpowered Subgroup Analyses

One of the most persistent statistical errors in clinical manuscripts is presenting multiple subgroup analyses without appropriate power calculations, multiple testing corrections, or pre-specification in the protocol. Reviewers — particularly at Lancet, NEJM, and JAMA — are specifically trained to identify post-hoc subgroup fishing. If your subgroup analyses were pre-specified, say so explicitly and cite the protocol registration number. If they were exploratory, label them as such and apply Bonferroni or FDR corrections.
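The correction step above can be sketched as follows, using `statsmodels`. The five p-values are invented for illustration; note how Bonferroni (which controls the family-wise error rate) is stricter than the Benjamini-Hochberg FDR procedure.

```python
# Illustrative sketch: adjusting exploratory subgroup p-values for
# multiple comparisons. The raw p-values below are invented examples.
from statsmodels.stats.multitest import multipletests

subgroup_p = [0.012, 0.034, 0.047, 0.21, 0.003]

# Bonferroni: controls family-wise error rate (conservative)
reject_bonf, p_bonf, _, _ = multipletests(subgroup_p, alpha=0.05,
                                          method="bonferroni")
# Benjamini-Hochberg: controls false discovery rate (less conservative)
reject_fdr, p_fdr, _, _ = multipletests(subgroup_p, alpha=0.05,
                                        method="fdr_bh")

for raw, b, f in zip(subgroup_p, p_bonf, p_fdr):
    print(f"raw={raw:.3f}  bonferroni={b:.3f}  fdr_bh={f:.3f}")
```

With these example values, only one subgroup survives Bonferroni correction while two survive FDR correction, which is precisely why the manuscript must state which procedure was used and whether the analyses were pre-specified.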

About the Author
Dr. Qasim Al-Rashid

Director of Quality Assurance at MeritPeer. PhD in Statistics. Expert in CONSORT, STROBE, and PRISMA compliance.

Strengthen Your Manuscript Before Submission

MeritPeer's PhD-level expert reviewers provide the same calibre of feedback described in this article — structured, actionable, and journal-calibrated. Free quote in 24 hours.

Submit Manuscript for Free Quote →