My top 5 reasons for rejecting a manuscript
My hundredth paper was published in a peer-reviewed journal the other day. Having served on editorial boards and as a peer reviewer for many journals for nearly two decades now, and having reviewed hundreds of papers for both international and national journals, I felt it was time to go through all the papers I have rejected in this avatar of mine. I have preserved every paper I have reviewed. Having studied them carefully, I have crystallized the reasons for rejection and am sharing the top 5 with you. Should you have any queries, you are most welcome to email me. You can also contact me through my website www.drpankajdesai.com. You may find some of this difficult to follow, but then writing and reviewing a top-class research paper is a highly technical matter. Can’t make it any easier!
1. A matched cohort disguised as a case-control study:
This happens quite often with papers submitted to third and fourth tier journals, but it can happen in any journal. (What is the tier of a journal? Refer to the footnote.) The authors declare that they have done a matched case-control study, and there is indeed “matching”. But the selection of participants in the study is based on the exposure variable rather than the outcome. (What is an exposure variable? Keep reading; you will understand automatically in case you do not already know.)
Why is this important? Well, for one, the design informs the quality of the analyses. But even more fundamentally, the definition! The definition of a case-control study is that it starts with the end: that is to say, the outcome defines the case. So, if you are exploring whether Methyl Ergometrine is able to prevent PPH, it is wrong to state that "cases" were defined as those given Methyl Ergometrine and controls as those who were not. To label your study case-control, you need to classify your cases as those who experienced the outcome of interest (no PPH, in our example), making the controls those who did have PPH.
If you are enrolling based on the exposure variable (Methyl Ergometrine in this case), then even if you are matching on such variables as age, gender, etc., this is still a COHORT STUDY! Just get the design right, please! Now pick up any journal you are used to reading, identify any “case-control” study and apply this criterion. See for yourself how many were in fact COHORT studies passed off, and even accepted by the editors and peer reviewers, as “case-control” studies!
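If it helps to see the distinction mechanically, here is a minimal sketch in Python; the record layout and values are entirely hypothetical, invented only to show that the one thing separating the two designs is the variable on which you select participants.

```python
# Hypothetical records (not from any real study): each notes whether the woman
# received Methyl Ergometrine (the exposure) and whether she had PPH (the outcome).
records = [
    {"id": 1, "methyl_ergometrine": True,  "pph": False},
    {"id": 2, "methyl_ergometrine": False, "pph": True},
    {"id": 3, "methyl_ergometrine": True,  "pph": True},
    {"id": 4, "methyl_ergometrine": False, "pph": False},
]

# Case-control design: enrolment starts from the OUTCOME...
cases    = [r for r in records if not r["pph"]]   # experienced the outcome of interest (no PPH)
controls = [r for r in records if r["pph"]]       # did have PPH
# ...and you then look backwards at how many in each group were exposed.

# Cohort design: enrolment starts from the EXPOSURE, matched or not...
exposed   = [r for r in records if r["methyl_ergometrine"]]
unexposed = [r for r in records if not r["methyl_ergometrine"]]
# ...and you then follow both groups forward and count who develops (or avoids) PPH.
```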
2. Incidence or prevalence, and where is the denominator?
I cannot tell you how irritating it is to see someone refer to the incidence of something as a percentage. Incidence is NEVER a percentage. But as annoying as this is, it alone does not earn an automatic rejection. What actually invites a rejection is when someone reports this "incidence" in a study that is in fact a matched cohort.
By definition, matching means that you are not including the entire denominator of the population of interest, so whatever the prevalence of the exposure may seem to be in a matched cohort is the direct result of your forcing it into this particular mould. In other words, say you are matching 2:1 unexposed to exposed, the exposure is smoking, and the outcome of interest is the development of lung disease. First, if you are telling me that 10% of the smokers developed lung disease over the time frame, please call it prevalence and not incidence. Incidence must incorporate a uniform time factor in the denominator (e.g., per year). And second, do not tell me what the "incidence" of smoking was based on your cohort: by definition, in your group of subjects, smoking will be present in 1/3 of the group. Unless you have measured the prevalence of smoking in the parent cohort BEFORE you did your matching, I am not interested. This is just dull and thoughtless, so it definitely gets an automatic reject, or a strong question mark at the very least.
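To make the arithmetic concrete, here is a rough numerical sketch; every number below is invented for illustration and is not taken from any real study.

```python
# Invented numbers, for illustration only.

# Incidence needs person-time in the denominator.
new_lung_disease_cases = 12
person_years_followed  = 480.0
incidence_rate = new_lung_disease_cases / person_years_followed
print(f"Incidence: {incidence_rate:.3f} cases per person-year")    # 0.025 per person-year

# A plain proportion ("10% of smokers developed disease") has no time in the
# denominator, so it is not an incidence.
affected = 30
examined = 300
proportion = affected / examined
print(f"Proportion affected: {proportion:.0%}")                     # 10%

# In a 2:1 matched cohort (two unexposed for every exposed smoker), the fraction
# of smokers is fixed by the design, not measured from the parent population.
exposed_smokers   = 100
matched_unexposed = 2 * exposed_smokers
apparent_smoking_fraction = exposed_smokers / (exposed_smokers + matched_unexposed)
print(f"'Prevalence' of smoking in the matched sample: {apparent_smoking_fraction:.2f}")  # always 0.33
```

Whatever the true prevalence of smoking in the parent population, the matched sample will always show one third, purely by construction.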
3. Analysis that does not explore the stated hypothesis
I just reviewed a paper that initially asked an interesting question (this is how they get you to agree to review), but then turned the hypothesis on its head and ended up being completely stupid. Broadly, the investigators claimed to be interested in knowing how a certain exposure variable, in this case human rights violations, impacts maternal mortality, a legitimate question to ask. As I was reading through the paper, and as I could not make head or tail of the Methods section, it slowly began to dawn on me that the authors had gone after the opposite of what they promised: they started to look for predictors of what they had set up as the exposure variable! Instead of concentrating on the impact of human rights violations on maternal mortality, they started evaluating what caused human rights violations in their community, which they felt in turn affected maternal mortality. Now, this can sometimes still be legitimate, but the exposure variable needs to be already recognized as somehow relating to the outcome of interest. They first had to show that human rights violations in a community do impact maternal mortality, and only then go on to ask what affects human rights and their violation, and through that, maternal mortality. This was not the case here. So please, authors, do look back at your hypothesis once in a while as you are actually performing the study and writing up your results.
4. Stick to the hypothesis, don’t get trapped into advertising!
I recently rejected a paper that asked a legitimate question but, in addition to doing a substandard job with the analyses and the reporting, did the one thing that is an absolute no-no: it reported an explicit analysis of the impact of a single drug on the outcome of interest. And yes, you guessed it: the sponsor of the study was the manufacturer of the drug in question. And naturally, the drug looked particularly good in the analysis. I am not against manufacturer-sponsored studies, or even against those that end up shedding a positive light on their products. What I am against is arbitrary results from haphazard analyses that look positive for their drug without any justification or planning. So, all of this notwithstanding, the situation might still have been acceptable had the authors made a convincing case for why it was rational to expect this drug to have the beneficial effect, citing either theoretical considerations or prior evidence. They would, of course, have had to incorporate it into their a priori hypothesis. Otherwise this is just advertising, a random shot in the dark, not an academic pursuit of knowledge.
5. Language is a loaded but important issue
I do not want to get into the argument about whether publishing in English language journals brings more status than publishing in non-English language ones. That is not the issue. What I do want to point out, and this is true for both native and non-native English speakers, is that if you cannot make yourself understood, I have neither the time nor the ability to read your mind. If you are sending a paper to an English language journal, do make your arguments clearly, do make sure that your sentence structure is correct, and do use constructions that I will understand. As a dear friend, a co-editor and reviewer on many journals, Jen G., said: “It is not that I do not want to read foreign studies, no. In fact, you have no idea just how important it is to have data from geopolitically diverse areas. No, what I am saying is that I volunteer my time to be on editorial boards and as a peer reviewer, and I just do not have the leisure to spend hours unravelling the hidden meaning of a linguistically encrypted paper.” And even if I did, I assure you, you are leaving a lot to the reviewer's personal interpretation. So please, if you do not write in English well, give your data a chance by having an editor look at your document BEFORE you hit the submit button.
Footnotes:
• Tier of journals: What you are looking for is the journal's impact factor. It is basically the ratio of the number of times articles in the journal were cited to the number of articles the journal published. So a high impact factor means the papers it publishes are important and often referred to in other people's work. (A toy calculation of this ratio follows after these footnotes.) Unfortunately, hardly any journal in obgyn in India can be called a tier I journal.
• Counterfactual: expressing what has not happened but could, would, or might happen under different conditions, e.g., if kangaroos had no tails, they would topple over.
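The impact factor ratio mentioned above can be illustrated with toy numbers; the figures below are invented, and the real Journal Impact Factor is computed over a specific two-year window of citations and citable articles.

```python
# Toy numbers, invented for illustration only. The real Journal Impact Factor
# counts citations in one year to articles published in the previous two years,
# divided by the number of citable articles from those two years.
citations_to_recent_articles = 450
citable_articles_published   = 150
impact_factor = citations_to_recent_articles / citable_articles_published
print(f"Impact factor: {impact_factor:.1f}")   # 3.0
```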
[THANK YOU JEN AND MARYA ZILBERBERG! YOUR ORIGINAL INPUTS ON YOUR BLOGS WERE VERY IMPORTANT]