Yesterday, I discussed the increasing tendency of big pharma to invest in a few very large development programs - essentially taking a few big bets rather than many small ones - and highlighted the concern that this sort of approach seems extremely risky. I suggested that a more diverse portfolio of programs could make a lot more sense, especially since the ability to forecast sales years in advance appears to be quite poor. All things being equal, I'd be more confident in the ability of five smaller programs to achieve at least one disproportionately large success than in the ability of one large program to achieve or exceed its commercial goals.
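The portfolio intuition can be sketched with a toy calculation. The numbers below are purely illustrative assumptions (not data): suppose each small program independently has some chance of an outsized win, and compare that to a single big bet.

```python
# Toy portfolio sketch with made-up, illustrative probabilities.
# Assumes the five small programs succeed or fail independently,
# which is itself a simplification.

def p_at_least_one(p_each: float, n: int) -> float:
    """Probability that at least one of n independent bets succeeds."""
    return 1 - (1 - p_each) ** n

# Hypothetical inputs: each small program has a 15% chance of a
# disproportionately large success; the single big program has a
# 40% chance of meeting its commercial goal.
five_small = p_at_least_one(0.15, 5)   # ~0.556
one_big = p_at_least_one(0.40, 1)      # 0.40

print(f"Chance of at least one big win from five small bets: {five_small:.3f}")
print(f"Chance the single big bet pays off:                  {one_big:.3f}")
```

Under these assumed numbers the diversified portfolio wins; the real argument, of course, turns on whether anyone can estimate these probabilities well in the first place.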
At the same time, I thought it might make sense to outline what I see as the real-world thinking behind big pharma's big bets, a strategy that boils down to the fact that in practice, all things are never equal.
For starters, I suspect companies have a lot of confidence in their ability to predict commercial success, or at least to accurately evaluate market opportunity and judge market size, especially for products seen as better options in established markets (e.g. the PCSK9 programs in the news today). In fact, I've never spoken with a senior commercial executive who didn't exude exceptional confidence in his or her department's forecasting abilities - one executive from a large biotech even boasted that his group had a "real rocket scientist" cranking out the math.
It is difficult to find published data on this point, and the limited data I've seen paint a less optimistic picture. For example, Munos (see here, figure 6a) points to the low frequency (21%) of approved drugs achieving the historical blockbuster benchmark of $1B in sales as evidence of forecasting failure; I'd cite unpublished consulting data I saw (and posted above my desk) while I was still in academia, nearly a decade ago, demonstrating the absence of any correlation between Wall Street analysts' consensus peak sales estimates at the time of drug approval and the peak sales those same drugs ultimately achieved. It's tempting to wonder whether the confidence in forecasting is driven by the utility and convenience of having a forecast to guide planning, rather than by the actual accuracy of such forecasts (my take here; Kahneman's here).
(Readers might also enjoy this related discussion of the challenges of forecasting success in the start-up space.)
Second, if faced with the choice of funding five smaller development programs or a single large program, I can imagine thinking that focusing on a single program might provide both scientific and operational advantages.
Scientifically, it enables the company to select the program whose chances of actually working seem highest - again, in the case of the PCSK9 programs, the target has an unusually high degree of scientific validation, including very good human genetics, very good animal models, and even a useful biomarker to follow.
In addition, from a purely operational perspective, concentrating on a single, ultra-high priority program enables a company to bring its best resources to bear, rather than spreading them out across multiple competing initiatives.
The real question is whether all the unknowns in drug development - uncertainties about safety in particular, but also about efficacy and ultimate commercial opportunity (including, increasingly, reimbursement) - swamp the factors you think you have a good handle on. These risks would also attend smaller programs, incidentally, and might even be greater if a program's target seems less well understood.
I continue to worry about the wisdom of large bets, and find the idea of advancing a more diverse portfolio of smaller, focused programs targeting urgent and unmet needs much more compelling.
At the moment - judging not by stated sentiment but by actual resource allocation (i.e. the amount of development money going to a handful of very expensive programs) - many big pharma companies would seem to disagree.