Canadian journalist Donna Laframboise. Former National Post & Toronto Star columnist, past vice president of the Canadian Civil Liberties Association.
The lead author of two retracted COVID-19 papers is editor-in-chief of an Elsevier medical journal.
Earlier this month, two high-profile research papers were retracted on the same day. One, published in The Lancet, had concluded that coronavirus patients treated with malaria drugs were more likely to die. Published on May 22, it was officially withdrawn 13 days later.
The other, published in the New England Journal of Medicine, had found no evidence that widely prescribed medications increase the death rate of hospitalized COVID-19 patients with pre-existing heart problems.
The lead author in both instances was Mandeep Mehra, a professor of medicine at Harvard Medical School and head of the Heart and Vascular Center at Boston’s Brigham and Women’s Hospital.
The second listed author was Sapan Desai. An online bio describes him as an “internationally-recognized double board certified vascular surgeon.” Desai is the founder of Surgisphere Corporation, a data analytics firm that claimed to have acquired 96,000 highly detailed electronic medical records of COVID-19 patients from 671 hospitals on six continents.
The Lancet paper’s dramatic findings interrupted drug trials and changed government policy in multiple countries. It also increased the anxiety of coronavirus patients who’d been participating in those trials.
But six days after the paper appeared, more than 100 “clinicians, medical researchers, statisticians, and ethicists” addressed an open letter to the authors, and to Lancet editor-in-chief Richard Horton, questioning the integrity of the cited data.
Why were the hospitals which supplied this data not identified? Why weren’t standard statistical practices employed? Why no ethics review? Why didn’t the paper invite other researchers to examine for themselves the underlying data and computer code?
According to these experts, the medication dose sizes discussed were odd, the drug ratios sounded “implausible,” the Australian data was obviously erroneous, and the African data seemed “unlikely.”
Yet none of The Lancet‘s peer-reviewers apparently noticed. “In the interests of transparency,” said the signatories of the open letter, “we also ask The Lancet to make openly available the peer review comments that led to this manuscript to be accepted for publication [sic].”
An article in the New York Times says these events “have alarmed scientists worldwide who fear that the rush for research on the coronavirus has overwhelmed the peer review process.” Lancet editor Horton, it reports, now describes the retracted paper as a “fabrication” and “a monumental fraud.”
A headline in the UK Guardian says The Lancet has made one of the biggest retractions in modern history. How, asks the article that follows,
did a paper of such consequence get discarded like a used tissue by some of its authors only days after publication? If the authors don’t trust it now, how did it get published in the first place?…the sad truth is peer review in its entirety is struggling…
Neither of those articles mentioned an astonishing fact. Lead author Mehra is himself the editor-in-chief of The Journal of Heart and Lung Transplantation. Part of Elsevier’s scholarly publishing empire, this monthly journal hires editors for five-year terms. Mehra’s second term is coming to an end, and last year the search for a replacement began.
As the posted job description explains, the editor-in-chief is responsible for overseeing the peer review of papers submitted to that journal. He or she is constantly evaluating research, sorting solid science from weak science. The new editor-in-chief, we’re told, must have “a demonstrated understanding of statistics and statistical methods.”
So how could a man who has spent the past 10 years in such a role have co-authored this pair of retracted papers? How could anyone with any statistical sophistication have taken such dodgy data at face value?
“No matter which way you examine the data, use of these [malaria] drug regimens did not help,” Mehra declared in a press release when The Lancet paper was published. But it now appears he didn’t directly examine the data at all. On the day the paper was retracted, he explained in a subsequent statement:
Dr. Desai, who served as a co-author and whose team maintained this observational database, conducted various analyses. As first author, these were provided to me, and on the basis of these analyses, we published two peer-reviewed papers…
In other words, this longtime editor-in-chief took someone else’s word for it. He failed to ask elementary questions. He took it on faith that the analyses had been properly conducted. Mehra continued:
It is now clear to me that in my hope to contribute this research during a time of great need, I did not do enough to ensure that the data source was appropriate for this use. For that, and for all the disruptions – both directly and indirectly – I am truly sorry.
This, ladies and gentlemen, is the vaunted peer review system in action. Naive trust. Blind faith. By Mehra. By The Lancet. By the New England Journal of Medicine. Even when real lives, right now, hang in the balance.
Four years ago, I authored a report demonstrating that peer review is merely a sniff test. Typically performed by unpaid volunteers, it’s based on wholly subjective criteria, and is highly influenced by the pre-existing beliefs of those doing the reviewing. My report contains this paragraph:
In 2014, Science announced measures to provide deeper scrutiny of statistical claims in the research it publishes. John Ioannidis, the author of a seminal 2005 paper asserting that most published research findings are false, called this announcement “long overdue”. In his opinion, statistical review has become more important than traditional peer review for a “majority of scientific papers”.
In many places, statistical review still doesn’t occur. Even in our current situation, when COVID-19 research has the power to halt drug trials and change history, the vetting process at medical journals is a joke.