Does the pharmaceutical industry exaggerate its R&D costs?
Reprinted from BoingBoing on behalf of David Ng
One of the principal claims used to justify the pharmaceutical industry's hold on current patent practices is that research and development (R&D) is very expensive. The argument keeps coming up, and seems to be all the rage when arguing against things like the passing of Bill C-393 (which you can learn more about in this recent BoingBoing post). Although R&D costs are obviously high, a recent paper published in BioSocieties suggests that the oft-cited statistics, the ones always used to support this assertion for lobbying or public relations purposes, may in fact be inflated.
Here, the authors, Donald W. Light and Rebecca Warburton, look closely at where these numbers come from:
“The most widely cited figures (by government officials and the industry’s trade association for its global news network) for the cost to discover and bring a new drug (defined as a ‘new chemical entity’ or ‘new molecular entity’; not a reformulation or recombination of existing drugs) to market are US$802 million in 2000. This has been updated by 64 per cent to $1.32 billion in 2006.”
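Just to see how those two headline numbers relate, here's a quick back-of-the-envelope check (a purely illustrative snippet, using only the figures quoted above): a 64 per cent increase on $802 million does indeed land at roughly $1.32 billion.

```python
# Sanity check on the quoted update: $802M (2000) increased by 64 per cent.
cost_2000 = 802e6             # US$802 million, the widely cited 2000 estimate
cost_2006 = cost_2000 * 1.64  # "updated by 64 per cent"

print(f"${cost_2006 / 1e9:.2f} billion")  # -> $1.32 billion
```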
From this paper, we basically learn that the primary source of these figures is one particular study, published in 2003 by Joseph DiMasi, Ronald Hansen, and Henry Grabowski at the Tufts Center for the Study of Drug Development in Boston, Massachusetts. In general, there are issues of bias in how such figures were calculated, and the Light and Warburton paper systematically examines a number of variables suggesting that the $802 million figure, as well as subsequent numbers extrapolated from it, is a gross overestimate.
The paper is definitely worth a read, as it raises a number of points that cast serious doubt on these industry figures. Examples include:
1. High potential for bias in the data used in the 2003 study. This includes issues related to exclusivity of access to the data (we have numbers, but it's not clear exactly what they represent, since only the Tufts authors have seen them) and to the sample itself: the pharmaceutical companies appeared to have primary control over whether they participated and over what data was provided to the Tufts Center.
2. The figures do not account for a number of special and substantial tax provisions for R&D work. In other words, even though such tax measures offset part of the industry's R&D costs, they are not reflected in these final figures.
3. About 50% of the $802 million figure is actually due to “profits foregone.” In other words, as Light and Warburton succinctly put it, this equates to ‘You owe us for all our R&D costs, plus what we would have made had we not undertaken the project in the first place.’ Although this is obviously a factor for businesses to take into account, the authors ask whether it is really an appropriate way to calculate figures used to lobby for government-protected pricing. Never mind that one could argue R&D costs are simply a necessity when innovation is a key element of an industry. (A rough numeric sketch of how this capitalization inflates the total follows after this list.)
4. Both the trial cost and the time estimates are inflated. For trial costs, a number of discrepancies appear. For example, consider how much it actually costs to enroll human test subjects in clinical trials:
The DiMasi team refers rather opaquely to a complicated set of steps taken to arrive at the mean cost per trial and per subject. The resulting figure of $23,572 per subject is six times the average cost per subject of $3,861 reported by the National Institutes of Health for 1993, at the costly (later) end of the DiMasi period (1983 to 1994).
Related to this is the estimate of how long the R&D process takes, since longer time spans obviously equate to higher costs. Again, these estimates appear to be greatly exaggerated:
The $802 million estimate is based on 52 months for preclinical research, 72 months for trials and 18 months for regulatory review, a total of 142 months or 11.8 years (DiMasi et al, 2003a, pp. 164-166). Maximizing the length of time not only dramatizes how long and hard companies work to discover and develop a new medicine, but also maximizes the multiplication of profits foregone. Long development times are a major reason given for needing high prices. These figures, however, do not square with the lengths for trials actually reported by companies to the US FDA in the Federal Register. Trial length declined from almost 8 years for trials started in 1985 to less than 3 years for trials started in 1995 (Keyhani et al, 2006). Regulatory review times dropped from 2½ years to less than a year. Thus, for medicines that started testing in 1995, total trial and review time was down to less than 4 years in the United States and even less in Europe.
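To get a feel for why “profits foregone” and long timelines matter so much to points 3 and 4, here is a rough, purely illustrative sketch. The round numbers, the even spending profile and the 11 per cent cost of capital below are my own assumptions for illustration, not the actual DiMasi et al cash-flow model: spreading roughly $400 million of out-of-pocket spending over a 12-year development period and compounding it forward to launch roughly doubles the total, which is how about half of a headline figure can end up being assumed opportunity cost rather than money actually spent.

```python
# Illustrative sketch only: how capitalizing out-of-pocket R&D spending at a
# cost of capital ("profits foregone") can roughly double a headline estimate.
# All inputs are assumptions chosen for illustration, not DiMasi's actual data.

out_of_pocket = 400.0   # total out-of-pocket spend, $ millions (assumed)
years = 12              # assumed development time, close to the 11.8 years quoted
rate = 0.11             # assumed annual cost of capital

annual_spend = out_of_pocket / years

# Compound each year's spending forward to the launch date (mid-year convention).
capitalized = sum(annual_spend * (1 + rate) ** (years - y - 0.5) for y in range(years))

print(f"out-of-pocket total:   ${out_of_pocket:.0f}M")
print(f"capitalized at launch: ${capitalized:.0f}M")           # roughly double
print(f"'profits foregone' share: {1 - out_of_pocket / capitalized:.0%}")  # ~50%

# Shorter development timelines shrink the capitalized figure considerably:
for t in (12, 8, 4):
    cap = sum((out_of_pocket / t) * (1 + rate) ** (t - y - 0.5) for y in range(t))
    print(f"{t:>2} years -> capitalized ~ ${cap:.0f}M")
```

The point of the sketch is not to reproduce the Tufts model, only to show that a large share of such a figure is an assumed return on money, and that the result is very sensitive to how long development is claimed to take – which is exactly why the inflated timelines matter.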
Anyway, it’s causing quite a stir, and the Tufts Center for the Study of Drug Development has already issued a terse press release in rebuttal. In any event, it’s a good reason to be better informed on such matters, because they have implications in the wider scheme of things – such as how your health dollars are spent, and how to make policy more effective when dealing with global health issues.
LINK: Demythologizing the high costs of pharmaceutical research, BioSocieties (2011) 6, pp. 34-50. (Note that there is full-text access to this article for the month of March 2011, or you can find the article by typing “Demythologizing the high costs” into a search engine.)