Monday 14 December 2009

Cherry Picking

There’s an interesting debate ongoing about pharmaceutical companies’ approach to data publication. Unusually, it’s actually spilled over into the mainstream media (BBC Radio 4 news) and is captured in a head-to-head article called Is the conflict of interest unacceptable when drug companies conduct trials on their own drugs? (BMJ 2009; 339:b4949 and b4953).

The protagonists of the argument are Ben Goldacre (arguing YES to the proposition) and Vincent Lawton (arguing NO). You can access their viewpoints via the hyperlink in each of their names.

Having heard yet another pharma spokesman made to sound like a henchman of an evil empire on the radio, I had a closer look at each of their arguments.
(I confess I was also interested having read Ben’s book Bad Science and followed him on Twitter; he’s always good for some scientific controversy).

Ben uses a number of eye-popping stats to make his argument. The one that jumped out at me was that only 5.9% of industry-sponsored oncology trials are on PubMed, and that 75% of those were positive(1). This, he says, “is the routine grind of publication bias, where disappointing negative results on the benefits of treatments quietly disappear”.

My instant reaction was: does this study really show that drug companies are hiding 94.1% of the data on their marketed products in order to emphasise the “benefits of their treatments”?

Having actually read the Ramsey and Scoggins paper, the answer is NO.

That’s because this data wasn’t for products that made it to registration, it was for all trials (in any phase) that had taken place, irrespective of whether the product made it to market or not. This is very important, because somewhere between 82% and 95%(2) (notice my use of a range here, rather than cherry-picking) of oncology products never make it to market.

Therefore, it’s not that data is being held back in order to make marketed products look better; the truth is that a great many of those trials were conducted on products that never made it to market. There is no benefit to “sex up”, because there is no treatment.
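The attrition argument can be put as a toy calculation. This is my own illustrative sketch, using only the 82–95% attrition range quoted above and the simplifying assumption that unpublished trials largely concern products that failed:

```python
# Back-of-envelope sketch (illustrative, not data from the cited papers):
# if 82-95% of oncology products never reach market, then even if every
# trial on a marketed product were published, the fraction of *all* trials
# that could relate to a marketed product is capped at a small number.
for attrition in (0.82, 0.95):
    surviving = 1 - attrition  # fraction of products reaching market
    print(f"attrition {attrition:.0%}: ~{surviving:.0%} of products reach market")
```

On those assumptions, a low overall publication rate is what you would expect even without any suppression of data on marketed products.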

Additionally, in a disease area like oncology, which is renowned for its fast pace of development and tiny sample sizes (so that new treatments can get to very sick patients), a hell of a lot of trial data is inconclusive. Journals don’t want to publish this stuff (and the editors aren’t shy about declaring this) as doctors don’t want to spend their precious time reading it.

In short, for an accurate representation, this wasn’t a good study or data point to choose. Interestingly, Ben references another paper that would have been much better: the SSRI study(3). In it the authors follow 12 products that had been approved by the FDA and find that 69% of all studies were published. This doesn’t seem nearly so scandalous a figure, but it would have been far closer to accurate than the one that Ben chose.

The SSRI study doesn’t split out industry vs government publication rates, a comparison that Ben likes to make, but a quick search on the internet found another study that does.

Ross et al(4) looked at trial publication after registration in ClinicalTrials.gov across all therapy areas. Like the oncology study above, this too does not take into account product attrition rates, so the absolute levels are not meaningful, but they do make comparisons across sponsor types.

They found that 40% of industry sponsored clinical trials get published. However, US government sponsored trials fare little better at 47%, and of those trials sponsored by the US National Institutes of Health (NIH), only 41% are published. There is little, if any, real difference here.

As in the oncology study, the academic publishing rate is higher, here at 56%, but the incentive for volume of papers published in the academic setting is common knowledge (in many cases academics are measured on the number of their publications, not their usefulness) and explains this gap.

I realize that uncovering the truth behind this set of numbers doesn’t defeat the rest of Ben’s argument. But to be fair, this is the only one of his claims that I’ve looked into, and with the cherry-picking on show here, it does make me question some of the others – perhaps subject matter for further blog posts. What certainly also needs discussion is the practical implementation of what he’s proposing, should his assertions and claims hold true.


Competing interests: Quite obviously I work with the pharmaceutical industry, but no one has paid or asked me to write this article (dammit).

References:

(1) Ramsey S, Scoggins J. Commentary: practicing on the tip of an information iceberg? Evidence of underpublication of registered clinical trials in oncology. Oncologist 2008;13:925-9.

(2) Walker I, Newell H. Do molecularly targeted agents in oncology have reduced attrition rates? Nat Rev Drug Discov 2009;8:15-16.

(3) Turner EH, Matthews AM, Linardatos E, Tell RA, Rosenthal R. Selective publication of antidepressant trials and its influence on apparent efficacy. N Engl J Med 2008;358:252-60.

(4) Ross JS, Mulvey GK, Hines EM, Nissen SE, Krumholz HM. Trial publication after registration in ClinicalTrials.gov: a cross-sectional analysis. PLoS Med 2009;6(9):e1000144. doi:10.1371/journal.pmed.1000144.
