What do reviewers notice that authors miss?

During my recent experience as an external reviewer for two international publishers:
- Elsevier
- Virtus Interpress

I reviewed two academic papers submitted to:
- Journal of Accounting Education
- Journal of Governance and Regulation

Despite the differences in journals and contexts, the review comments revolved around almost the same core themes. Here are the points I would share with any researcher aiming to publish in a high-ranked journal:

1️⃣ The title is not a marketing façade
The title must accurately reflect the core substance of the research. In one of the papers, the work was rich and important, yet the title was misaligned with the content. This is a fundamental concern for any reviewer, regardless of the paper's overall quality.

2️⃣ The research gap… or nothing
Without a clear research gap, there is no real scientific contribution. A gap is not a rephrasing of what already exists, but a logical justification of what has not yet been addressed.

3️⃣ Methodology is not a formal procedure
Methodology must be:
- Aligned with the research question
- Justifiable
- Replicable

4️⃣ Statistical analysis: quality over quantity
The issue is not "how many statistical tests were used," but rather:
- Is the analysis robust?
- Has it undergone sensitivity testing?
In my reviews, I focused particularly on sensitivity analysis and on the quality of results, not merely their statistical significance. This aligns with what I teach and deliver in Systematic Review & Meta-Analysis courses, where we use critical appraisal tools such as JBI to assess methodological quality, not just form.

5️⃣ References are not academic decoration
Do not include references that are not actually used in the paper. A reviewer immediately notices an inflated reference list with no analytical function.

6️⃣ Artificial intelligence: an enhancement tool, not a substitute for the researcher
I explicitly stated to the publisher that AI was used only to improve the quality of the academic writing. AI can:
- Improve phrasing
- Enhance clarity
But it is not the author, nor the source of ideas or methodology.

Conclusion
Peer review is not about fault-finding; it is a test of the quality of research thinking, from the title to the final reference. If you are a researcher, always ask yourself: would my paper convince an editor… before it convinces me?
Common Issues in Engineering Paper Peer Review
Summary
Peer review is a critical step in publishing engineering papers, where experts evaluate submitted research for clarity, originality, and scientific accuracy. Common issues in engineering paper peer review include unclear titles, methodological weaknesses, biased reviewer behavior, and problems with reproducibility, all of which can hinder the progress of scientific discovery.
- Clarify your focus: Make sure your paper’s title and content accurately reflect your research question, methods, and findings so readers and reviewers understand its significance.
- Avoid reviewer bias: Declare conflicts of interest and advocate for transparency in the review process to prevent unfair rejections or favoritism.
- Share your work: Provide access to your data, code, and methods so others can verify your results and build on your research confidently.
-
Is anonymous peer review ensuring quality or enabling revenge?

Reviewer 2 killed my paper with one sentence: "This work lacks novelty and rigour." Six months of research dismissed in 12 words. No explanation. No constructive feedback. Just rejection.

Here's the twist: I discovered later that Reviewer 2 was working on identical research. My rejection became their publication six months later.

This taught me something ugly about peer review: the system meant to protect science is broken. I've now reviewed 67 papers myself. I've seen things that would shock you:
- Reviewers demanding citations to their own work
- Three-month delays for reviews that take 4 hours to complete
- Senior researchers ignoring obvious conflicts of interest
- Competitors rejecting breakthrough work to protect their turf
- Reviews written to humiliate rather than improve

The worst part? Everyone knows this happens. Nobody talks about it publicly. We pretend peer review is sacred and objective, but it has become a gatekeeping tool for academic elites.

Anonymous review was meant to ensure honesty. Instead, it enables:
- Competitor suppression
- Bias without consequences
- Revenge without accountability
- Career advancement through others' rejection

I'm not saying we abandon peer review. But we need radical transparency:
- Optional open peer review
- Consequences for abusive reviews
- Conflicts of interest declared openly
- Review timelines with accountability

The current system is failing early-career researchers. It's failing science itself. Junior researchers with breakthrough ideas get rejected by threatened seniors. Innovation gets suppressed to protect established paradigms.

We can do better. But first, we need to admit the problem exists. Stop pretending peer review is perfect. Start demanding accountability.

What's your worst peer review experience? Have you ever suspected a conflict of interest in your rejection?

#peerreview #academia #research #academicpublishing
-
I've peer reviewed multiple machine learning for chemistry papers lately and been 'reviewer 2' for the same few reasons. In order to do better science faster, please do these before submission so I don't have to ask for them in review:

1. Make your code (and data) available, and preferably open source, whenever possible. If the core advancement in your paper is the result of code being executed, I need to be able to inspect that code and run it myself. And once published, your readers will want to do the same!

2. Engage with baselines. Packages like MolPipeline make this a matter of only FOUR lines of code (see attached example). Including proper baselines is as important as a thorough literature review, as it puts your work in context.

3. Statistical tests. Training machine learning models is an experiment, in that it is subject to random error. You should not run a single train/test split and claim differences between models. Instead, read "Practically Significant Method Comparison Protocols for Machine Learning in Small Molecule Drug Discovery" and use their starter code to rigorously validate your conclusions.

4. Proofread. As the pace of publishing increases, we're inclined to push things through faster. Typos are totally fine, but incorrect equations and missing words take me out of your paper when I'm reading. Spend an extra hour proofreading your paper, and save yourself an extra round of peer review. This applies to citations too: make sure authors' names are properly capitalized, the DOI or URL is included, and so on. Citation managers do a good job here, but you still need to proofread!
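Points 2 and 3 above can be illustrated together. The sketch below is not the MolPipeline example the post refers to; it is a minimal scikit-learn/scipy stand-in on toy data, showing the general pattern: compare your model against a trivial baseline over repeated train/test splits, then use a paired statistical test rather than a single split to support any claimed difference.

```python
# Minimal sketch (assumptions: scikit-learn and scipy available; toy
# regression data stands in for a real chemistry dataset).
import numpy as np
from scipy.stats import wilcoxon
from sklearn.datasets import make_regression
from sklearn.dummy import DummyRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.metrics import mean_absolute_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=10, noise=5.0, random_state=0)

model_errs, baseline_errs = [], []
for seed in range(10):  # ten splits, not one
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=seed
    )
    model = RandomForestRegressor(n_estimators=50, random_state=seed).fit(X_tr, y_tr)
    baseline = DummyRegressor(strategy="mean").fit(X_tr, y_tr)  # trivial baseline
    model_errs.append(mean_absolute_error(y_te, model.predict(X_te)))
    baseline_errs.append(mean_absolute_error(y_te, baseline.predict(X_te)))

# Paired test across splits: is the model actually better than the baseline,
# or is the difference within split-to-split noise?
stat, p = wilcoxon(model_errs, baseline_errs)
print(f"model MAE {np.mean(model_errs):.2f} vs "
      f"baseline MAE {np.mean(baseline_errs):.2f}, p = {p:.4f}")
```

The same loop works for classification or for comparing two non-trivial models; the key design choice is that the test is paired per split, which is what the protocol paper cited above formalizes.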
-
Why do top journals reject good papers?

After a series of recent rejections, I've been reading through editorials, partly to refresh myself on rejection, but also to remind myself how to avoid making mistakes. One editorial resonated, reminding me that not all rejections happen because the papers are bad; some reflect challenges in the broader academic ecosystem.

In an MIS Quarterly editorial, Straub (2008) identifies seven recurring issues in the journal review process:
1. Good papers are rejected more often than weak papers are published, despite rejection doing greater harm to a journal's mission.
2. High rejection rates are mistakenly treated as indicators of journal quality, even though they are not correlated with journal rankings.
3. Methodological rigor is overvalued relative to intellectual contribution, leading reviewers to discount novel or field-shaping ideas.
4. Reviewer consensus is often deferred to too heavily, even when editors recognize the potential importance of a paper.
5. Editors underuse their judgment and authority, functioning as vote counters rather than intellectual stewards.
6. Risk aversion dominates editorial decision-making, favoring incremental work over innovative research.
7. Journal culture prioritizes rejection over discovery, rather than actively seeking and developing the best work in the field.

So what to do? (I take some liberties here.) How journals can reduce the rejection of good papers:
1. Reframe the purpose of peer review: treat review as a developmental process aimed at improving promising work, not filtering out imperfection.
2. Decouple journal quality from rejection rates: evaluate journals by the impact and influence of published work, not by how many submissions are declined.
3. Prioritize ideas before methods: assess intellectual contribution first; evaluate methodological rigor as a means of strengthening ideas, not eliminating them.
4. Empower editors to lead decisions: editors should synthesize reviewer input rather than defer to majority votes, especially when ideas are novel or integrative. Note, this is easier said than done.
5. Train reviewers to recognize contribution diversity: encourage openness to all kinds of work, not only dominant approaches.
6. Normalize editorial risk-taking: accept that publishing influential work sometimes involves uncertainty and additional developmental effort.
7. Shift journal culture from rejection to discovery: incentivize editors and boards to actively seek, cultivate, and advance the most intellectually stimulating research in the field.

The underlying idea of Detmar's editorial is simple: journals advance disciplines by developing strong ideas, not by perfecting gatekeeping. If journals identify, develop, and publish research that advances and reshapes our understanding, they have a real opportunity to have an impact on the world.

Citation: Straub, D. W. (2008). Why do top journals reject good papers? MIS Quarterly, 32(3), iii–vii.
-
Did I Really Have to Play the "Peer Review Grinch" This Holiday Season 🎄?

I genuinely wanted to spread end-of-year cheer and close out 2025 with an "accept." Instead, I found myself recommending a rejection, because the methods and numbers didn't line up. A few red flags that should never be "normal":

1. Design confusion: The paper described a randomized crossover, but the allocation language read like parallel groups. Those aren't interchangeable, and it matters for the analysis and the inferences.

2. The power problem: 16 participants across 4 conditions, with lots of outcomes and time points, can quickly become ~4 per condition (depending on what actually happened). That's a thin foundation for strong claims.

3. Reporting that doesn't reconcile: When key stats appear duplicated across distinct outcomes, it raises questions about the analysis pipeline and basic reporting checks.

And (channeling a kinder Grinch 😄): this isn't about dunking on one manuscript. It's a reminder that we need stronger training and stronger editorial guardrails around study design, power planning that matches the actual model, and reproducibility basics.

What I'd love to see more of in 2026: clear designs, a prespecified primary endpoint, reproducible power assumptions, and methods you can actually reconstruct from the paper.

Shoutout to the authors who make reviewers' jobs easy with crystal-clear methods; you're the real MVPs. Happy New Year 🎉

#SportsNutrition #PeerReview #AcademicPublishing #ResearchMethods #Statistics #Reproducibility #StudyDesign #ScienceIntegrity
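The "power problem" above is easy to make concrete by simulation. The sketch below assumes an illustrative parallel-groups scenario (4 groups of 4, normal noise with sd 1, and a moderate spread of group means that I chose myself; none of these numbers come from the reviewed manuscript). It estimates how often a one-way ANOVA F-test at alpha = 0.05 would detect the effect, which shows why ~4 per condition is a thin foundation.

```python
# Simulation-based power estimate for a one-way design with 4 groups of 4.
# All effect sizes here are illustrative assumptions, not the paper's data.
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(0)
group_means = [0.0, 0.5, 0.5, 1.0]   # assumed moderate effect, sd = 1
n_per_group, n_sims, alpha = 4, 2000, 0.05

hits = 0
for _ in range(n_sims):
    # Draw one simulated experiment: 4 independent groups of 4 observations.
    groups = [rng.normal(m, 1.0, n_per_group) for m in group_means]
    _, p = f_oneway(*groups)          # one-way ANOVA F-test
    hits += p < alpha                 # count significant results

print(f"estimated power with n = {n_per_group}/group: {hits / n_sims:.2f}")
```

Under these assumptions the estimated power lands far below the conventional 0.80 target, which is exactly the reviewer's point: with this design, a real effect would usually go undetected, and any "significant" finding is fragile.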
-
Why do research papers get rejected?

I agree, getting a research paper rejected can be disappointing, BUT understanding the common reasons behind rejections can help you avoid them in the future. In my experience, the following mistakes are common and critical:

1️⃣ Poor Fit with the Journal's Scope
Reason: Submitting to a journal misaligned with your topic.
Fix: Review the journal's aims and scope thoroughly before submission.

2️⃣ Inadequate or Flawed Methodology
Reason: Weak study design, small sample size, or flawed data collection methods.
Fix: Ensure your methodology is sound, well-documented, and peer-reviewed.

3️⃣ Lack of Originality or Novelty
Reason: Papers that repeat existing research often face rejection.
Fix: Clearly demonstrate how your work contributes new knowledge or perspectives.

4️⃣ Weak Writing and Presentation
Reason: Poor organization or formatting distracts from your research quality.
Fix: Use clear language and polish your presentation.

5️⃣ Failure to Follow Submission Guidelines
Reason: Overlooking specific journal requirements (e.g., formatting, word count).
Fix: Always adhere to the journal's submission guidelines.

6️⃣ Insufficient Literature Review
Reason: Not providing a comprehensive review of existing research.
Fix: Conduct an extensive review and ensure your research is well-grounded.

7️⃣ Overstated or Unsubstantiated Claims
Reason: Making claims unsupported by data or references.
Fix: Be cautious with claims and back them with evidence.

8️⃣ Ethical Issues or Data Manipulation Concerns
Reason: Violations of ethical standards, such as undisclosed conflicts of interest.
Fix: Follow ethical research standards and be transparent.

9️⃣ Poor Response to Reviewer Feedback
Reason: Failing to address reviewers' constructive criticism.
Fix: Take feedback seriously and revise accordingly.

🔟 High Rejection Rates for Certain Topics
Reason: Some fields have higher rejection rates due to oversaturation.
Fix: Target journals that specialize in your niche.

How to Avoid Rejection
✔️ Research Journal Fit: Choose journals that align with your research topic.
✔️ Strengthen Methodology: Build robust, reproducible methods.
✔️ Polish Writing: Use clear, concise language.
✔️ Address Reviewer Comments: Revise seriously and thoroughly.

What's your experience with journal rejections? Share your insights below!
🔄 Repost if you found these tips helpful! Follow Muhammad Haroon for more practical research advice!
-
I have reviewed countless research papers, and the reasons for rejections are often the same. Don't make these 10 mistakes!

1️⃣ Submitting to the Wrong Journal → If your research doesn't align with the journal's scope, it's an automatic rejection.

2️⃣ Lack of Novelty → Editors look for fresh insights. If your study doesn't add value, it won't make the cut.

3️⃣ Flawed Methods → Weak or poorly justified methodology raises major concerns for reviewers.

4️⃣ Poorly Written Abstract → Your abstract is the first impression. If it's unclear or unfocused, your paper may not even be read.

5️⃣ Outdated or Incomplete Literature Review → Missing key references or failing to position your work within existing research weakens your credibility.

6️⃣ Data Analysis Errors → Inaccurate or inappropriate statistical methods can undermine your entire study.

7️⃣ Overstating Findings → Stay grounded in your data. Exaggerated claims will be scrutinized.

8️⃣ Ignoring Submission Guidelines → Formatting, word limits, and citation styles matter. Failure to follow instructions signals carelessness.

9️⃣ Ethical Issues → Issues like undisclosed conflicts of interest, plagiarism, or lack of ethical approval are deal-breakers.

🔟 Not Addressing Reviewer Comments → Revisions are part of the process. Dismissing feedback can cost you publication.

What's been your biggest challenge in getting published?

♻️ Repost, hit follow, and turn on your notifications 🔔
#AcademicPublishing #ResearchTips #PhDLife
-
Future Challenges of Journal Editing

Today I contributed to a discussion on the future challenges of journal editing organized by Piotr Makowski at Queen's Business School. I was joined by fellow editors Patrick Haack, PhD and Shuang Ren, and colleagues from the school. Here are some issues I presented for discussion.

What are the challenges?
- In recent years there has been a dramatic increase in submissions to journals. At the same time, the shift towards open access has put pressure on journals to publish more papers.
- To manage the increasing volume and maintain quality, desk rejection rates have increased, undermining attempts to follow double-blind peer review for all submissions.
- Higher submissions have increased the need for more reviewers, which has resulted in reviewer fatigue, acting as a bottleneck on the peer review process.
- The growing prevalence of paper mills, writing cartels, and abuse of AI increases the risks for journals and the wider reputation of research in society.

What are the potential solutions?
- To manage higher volumes of papers, journals need to adapt publication criteria and perhaps increase editorial decision power (e.g. desk rejections). This calls for stronger training and mentoring of incoming editors to ensure consistency across teams.
- Increasing rates of desk rejection may necessitate a change in mindset towards other approaches to peer review, such as triple-blind review (where author identity is hidden from the editor). Whilst triple-blind review can reduce inherent biases in editorial decision-making, it can be difficult to operationalise.
- More recognition needs to be given to the important community work of reviewers, with clear paths from high-quality reviewing towards membership of editorial boards and editorial teams.
- Alongside this, journal editors should increasingly be selected not only on their individual research track record, but also on their experience and excellence in reviewing and peer review activities.
- Growing reputational risk from unethical practices requires a shift in focus, with editors not only managing quality, but also the integrity of the peer review process. Learned societies and bodies such as the Committee on Publication Ethics have an important role to play in this future.

#Peerreview #Research #Publications
-
Peer review is the cornerstone of scholarly publishing. Some reviewers offer gentle yet unhelpful feedback. Others may be harsh but give insightful comments. Striking the right balance is key. Let me share my approach to being the 'just right' peer reviewer.

There are 2 parts:
Part 1: What to pay attention to (per section)
Part 2: Scripts on how to critique politely

Part 1:

📝1️⃣ Abstract:
• Is it a short, clear summary of the aims, key methods, important findings, and conclusions?
• Can it stand alone?
• Does it contain unnecessary information?

🚪2️⃣ Introduction:
Study premise: Is it saying something new about something old?
• Does it summarize the current state of the topic?
• Does it address the limitations of the current state of this field?
• Does it explain why this study was necessary?
• Are the aims clear?

🧩3️⃣ Methods:
• Study design: right to answer the question?
• Population: unbiased?
• Data source and collection: clearly defined?
• Outcome: accurate, clinically meaningful?
• Variables: well justified?
• Statistical analysis: right method, sufficient power?
• Study robustness: sensitivity analysis, data management.
• Ethical concerns addressed?

🎯4️⃣ Results:
• Are results presented clearly, accurately, and in order?
• Easy to understand?
• Tables make sense?
• Measures of uncertainty (standard errors/P values) included?

📈6️⃣ Figures:
• Easy to understand?
• Figure legends make sense?
• Titles, axes clear?

🌐7️⃣ Discussion: The interpretation.
• Did they compare the findings with current literature?
• Is there a research argument? (claim + evidence)
• Limitations/strengths addressed?
• Future direction?

📚8️⃣ References:
• Key references missing?
• Do the authors cite secondary sources (narrative review papers) instead of the original paper?

Part 2:
🗣️ How do you give your critique politely? Use these scripts.

Interesting/useful research question, BUT weak method:
- "The study premise is strong, but the approach is underdeveloped."

Robust research method, BUT the research question is not interesting/useful:
- "The research method is robust and well thought out, but the study premise is weak."

Bad writing:
- "While the study/research appears to be strong, the writing is difficult to follow. I recommend the authors work with a copyeditor to improve the flow, clarity, and readability of the text."

Results section does not make sense:
- "The data reported in {page x/table y} should be expanded and clarified."

Wrong interpretation/wrong conclusion:
- "The authors stated that {***}, but the data does not fully support this conclusion. We can only conclude that {***}."

Poor Discussion section:
- "The authors {did not/fail to} address how their findings relate to the literature in this field."

Copy this post into a word document and save it as a template. Use it every time you have to review a paper. If you are the receiver of peer review, you can also use this to decode what the reviewer is saying. 😉
-
Some tips for fellow researchers.

In the past few months, I have been reviewing papers submitted for publication in Springer Nature journals like Discover Sustainability, Agriculture & Food Security, and Discover Agriculture. Since I am familiar with the context, I mostly reviewed studies done in Ethiopia. Here are the common problems I saw, with my suggestions:

- Old data: you know that paper you worked on years ago, and then one day you pick up your phone, call that friend, and say "hey, why don't we publish it?" Sure, good idea, but please update the data and context before submitting.
- Carelessness and not double-checking: I still see papers with internal comments left in them. This is how you tell me you don't care without telling me you don't care. :)
- Minor formatting issues: even great papers had these, so check your details.
- Lack of connectedness between sections: especially common when many authors are on the list. It's great that one person is not suffering alone, but when dividing tasks, make sure everyone is paying attention to consistency.
- If you are not great at making diagrams, accept it: don't submit messy, hard-to-understand diagrams. They are meant to ease the concept and catch your readers' attention, not complicate it. Outsource the work if you have to.
- Language editing, grammar, punctuation, spelling: writing a paper in your second language is hard, obviously! Get help from people if you can. I know many universities might not have writing centers, but use any resources you can find.

Most papers have good intentions, but small mistakes happen and lead to rejection or major revision. Try to submit a paper of good quality. Cheers!