Search intent and safe service scope
Who is this guide for? This page is written for readers searching for "How to Write the Results Section: Statistical Reporting Standards That Satisfy Reviewers" who need a clear, trustworthy, and practical explanation rather than a generic sales message. It clarifies what can be supported ethically, which files are useful, and how to move from uncertainty to a defined consulting brief.
The results section is where data speaks — no interpretation, no hypothesis discussion, no literature comparison. Just the objective presentation of what your study actually found. The most common struggle for early-career researchers is losing the boundary between "showing data" and "slipping into discussion." This guide lays out the principles I've crystallized over years, so your results section stands up to reviewer scrutiny.
Ordering Results: Follow Your Hypotheses
The strongest principle: report findings in the same order as your hypotheses or research questions in the introduction. When a reviewer reads three hypotheses in the introduction and encounters them in a different order in the results, cognitive load increases. Consistent ordering keeps the reviewer oriented.
Standard flow for clinical and social science papers: participant characteristics (sample demographics) → descriptive statistics → inferential tests (hypothesis testing, regression) → exploratory/additional analyses (if any).
Reporting Descriptive Statistics Correctly
For continuous variables, report the mean and standard deviation in APA format: "M = 24.6, SD = 3.2." If data are non-normally distributed, use the median and interquartile range instead: "Mdn = 22, IQR: 18–28." For categorical variables, always provide both n and %: "87 participants (52.4%) were female."
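As a quick sketch of how these formats can be produced (using Python's standard `statistics` module; the scores below are made-up illustrative values, not from any real study):

```python
import statistics

# Illustrative, made-up scores for one continuous variable
scores = [22, 19, 25, 24, 28, 21, 23, 26, 24, 25]

m = statistics.mean(scores)
sd = statistics.stdev(scores)  # sample SD (n - 1 denominator)
mdn = statistics.median(scores)
q1, _, q3 = statistics.quantiles(scores, n=4)  # quartiles for the IQR

print(f"M = {m:.1f}, SD = {sd:.1f}")         # APA format for normal data
print(f"Mdn = {mdn:g}, IQR: {q1:g}-{q3:g}")  # for non-normal data

# Categorical variable: always report both n and %
n_female, n_total = 87, 166
print(f"{n_female} participants ({n_female / n_total * 100:.1f}%) were female")
# -> 87 participants (52.4%) were female
```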
Reporting p-Values Correctly
This is the single most common error source. Modern journals (APA 7, NEJM, Lancet) require exact p-values, not threshold notation. Correct: p = .032 or p = .003. When p < .001, write "p < .001."
| Incorrect | Correct |
|---|---|
| p<0.05 | p = .032 |
| p=NS | p = .184 |
| p=0.0001 | p < .001 |
| p>0.05 | p = .417 |
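A small helper (a hypothetical sketch, not taken from any journal's tooling) makes it easy to apply these rules consistently across a manuscript:

```python
def format_p(p: float) -> str:
    """Format a p-value per APA 7: exact to three decimals, no leading
    zero, and 'p < .001' below that threshold."""
    if p < 0.001:
        return "p < .001"
    # Drop the leading zero: "p = 0.032" -> "p = .032"
    return f"p = {p:.3f}".replace("0.", ".", 1)

print(format_p(0.032))   # p = .032
print(format_p(0.184))   # p = .184
print(format_p(0.0001))  # p < .001
```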
Effect Sizes: The Modern Must-Have
Virtually all serious journals now consider p-values alone insufficient. Effect sizes must accompany every statistical test:
- t-test: Cohen's d (0.2 small, 0.5 medium, 0.8 large)
- ANOVA: η² or partial η²
- Correlation: r is already an effect size (0.10 small, 0.30 medium, 0.50 large)
- Chi-square: Cramér's V or phi coefficient
- Regression: R² and standardized β
- Logistic regression: Odds ratio (OR) with 95% CI
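As one worked example from the list above, Cohen's d for an independent-samples t-test is the mean difference divided by the pooled standard deviation. A minimal pure-Python sketch, with illustrative data:

```python
import math

def cohens_d(group1, group2):
    """Cohen's d using the pooled SD (assumes the equal-variance pooling
    typically paired with an independent-samples t-test)."""
    n1, n2 = len(group1), len(group2)
    m1, m2 = sum(group1) / n1, sum(group2) / n2
    v1 = sum((x - m1) ** 2 for x in group1) / (n1 - 1)
    v2 = sum((x - m2) ** 2 for x in group2) / (n2 - 1)
    pooled_sd = math.sqrt(((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2))
    return (m1 - m2) / pooled_sd

def benchmark(d):
    """Cohen's conventional labels (0.2 small, 0.5 medium, 0.8 large)."""
    d = abs(d)
    return ("large" if d >= 0.8 else
            "medium" if d >= 0.5 else
            "small" if d >= 0.2 else "negligible")

d = cohens_d([2, 4, 6], [1, 3, 5])
print(f"Cohen's d = {d:.2f} ({benchmark(d)})")  # Cohen's d = 0.50 (medium)
```

Note that the benchmark labels are conventions, not hard rules; what counts as a meaningful effect depends on the field.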
Confidence Intervals
Beyond p and effect size, 95% confidence intervals are now standard. Example: "A statistically significant difference was found (M_control = 22.4, SD = 3.1; M_intervention = 28.7, SD = 4.2; t(148) = 5.83, p < .001, Cohen's d = 0.95, 95% CI [4.18, 8.42])." This single sentence answers every statistical question a reviewer might ask.
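A sketch of how such a complete report line can be assembled (using numpy/scipy with simulated illustrative data, so the printed numbers will not exactly match the example sentence above):

```python
import math
import numpy as np
from scipy import stats

# Simulated illustrative data: 75 participants per group
rng = np.random.default_rng(7)
control = rng.normal(loc=22.4, scale=3.1, size=75)
intervention = rng.normal(loc=28.7, scale=4.2, size=75)

t_stat, p_val = stats.ttest_ind(intervention, control)
n1, n2 = len(intervention), len(control)
df = n1 + n2 - 2
diff = intervention.mean() - control.mean()
pooled_var = ((n1 - 1) * intervention.var(ddof=1)
              + (n2 - 1) * control.var(ddof=1)) / df
se = math.sqrt(pooled_var * (1 / n1 + 1 / n2))
t_crit = stats.t.ppf(0.975, df)                 # critical t for a 95% CI
ci_low, ci_high = diff - t_crit * se, diff + t_crit * se
d = diff / math.sqrt(pooled_var)                # Cohen's d (pooled SD)

p_text = "p < .001" if p_val < 0.001 else f"p = {p_val:.3f}".replace("0.", ".", 1)
print(f"t({df}) = {t_stat:.2f}, {p_text}, "
      f"Cohen's d = {d:.2f}, 95% CI [{ci_low:.2f}, {ci_high:.2f}]")
```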
Tables: What Belongs There?
General rule: if you need more than three numbers in a row, use a table. Demographics, multiple group comparisons, and regression coefficients are ideal table material. Single numbers or simple two-group comparisons belong in the text. Table titles should be descriptive: not "Table 1. Demographics" but "Table 1. Demographic and clinical characteristics of study groups (N = 240)."
Figures: When and Which Type?
Figures excel at showing trends or patterns. Use line charts for change over time, bar charts or boxplots for group comparisons, and scatter plots for relationships between continuous variables. Avoid pie charts in academic manuscripts — bar charts are almost always clearer. Every figure needs a comprehensive caption that makes it understandable without reading the main text.
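For instance, a simple group-comparison boxplot could be sketched like this (matplotlib with simulated illustrative data; the filename and labels are placeholders, not from the original study):

```python
import matplotlib
matplotlib.use("Agg")  # render without a display (e.g., on a server)
import matplotlib.pyplot as plt
import numpy as np

# Simulated illustrative data for two groups
rng = np.random.default_rng(1)
control = rng.normal(22.4, 3.1, 75)
intervention = rng.normal(28.7, 4.2, 75)

fig, ax = plt.subplots(figsize=(4, 4))
ax.boxplot([control, intervention])
ax.set_xticks([1, 2])
ax.set_xticklabels(["Control", "Intervention"])
ax.set_ylabel("Outcome score")
fig.tight_layout()
fig.savefig("figure1_group_comparison.png", dpi=300)
```

The caption for such a figure would then describe the groups, the sample sizes, and what the boxes and whiskers represent, so the figure stands on its own.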
Common Mistakes in Results Writing
- Duplicating table data in text: Don't rewrite every number. Text should highlight key findings and point to tables: "As shown in Table 2, the intervention group scored significantly lower."
- Interpreting results: "This confirms our hypothesis" belongs in the discussion, not here.
- Reporting "trends": For p = .07, "approaching significance" is no longer accepted. If it's not significant, say so.
- Hiding negative results: Non-significant findings must be reported. Reviewers notice missing analyses.
- Inconsistent decimals: Use the same number of decimal places throughout (typically 2, or 3 for p-values).
Boss Academy Results Section Support
Boss Academy supports results sections that follow correct statistical reporting format, comply with journal requirements, and stand up to reviewer scrutiny, covering everything from SPSS analysis to APA 7 reporting, and from table preparation to journal formatting.
Reliability, ethical boundaries and quality control
For "How to Write the Results Section: Statistical Reporting Standards That Satisfy Reviewers," the quality criterion is not keyword density; it is whether the reader can make a safer, better-informed decision. Boss Academy keeps academic ownership with the researcher and focuses on transparent consulting, methodological clarity, and deliverables that can be explained during supervisor, jury, or reviewer evaluation.
- Research questions, statistical choices, tables and interpretation are checked for internal consistency.
- Personal or clinical data should be anonymized before sharing; only necessary files should be uploaded.
- The final output should be usable as a roadmap, revision plan, analysis report, formatted document or publication-ready support file.