By Scott Gottlieb
On the surface, it makes perfect sense. Prescriptions for hormone-replacement therapy to treat the symptoms of menopause plummeted after interim results from a big government study of the drugs showed they were causing heart attacks. But beneath the surface is another, lesser-known story. In the five years since federal researchers first unveiled their results, a series of follow-up studies based on the same government data has found that many of the initial conclusions were premature, indefinite or just plain wrong.
The $725 million Women's Health Initiative was rooted in some good intentions, but was set against a backdrop of fiscal and political bickering over the efficacy of the costly drugs. Unfortunately, this influenced not only how the findings were computed but also how they were received. As this newspaper's Tara Parker-Pope first reported in July, when initial results confirmed populist refrains that the drugs were being overused, the data were rushed to print with a carefully orchestrated PR blitz, while subsequent efforts to test the initial conclusions were sluggish.
Federal researchers refused to share bottom-line results, even with outside academics or the companies that manufactured the drugs. This allowed them to closely guard their monopoly over the original data and therefore the prerogative to publish follow-up findings. It's a sure bet if the data had been more widely shared, important analyses that debunked some of the initial conclusions would have come to light much sooner.
And unless something is done to make sure that data are shared, we can expect many similarly flawed government studies testing the efficacy of drug treatments, especially the politically popular "comparative" studies that pit expensive new medicines against older, cheaper alternatives with the aim of cutting health-care spending.
The State Children's Health Insurance Program (Schip), created in 1997 to cover children from lower-income families who make too much to qualify for Medicaid, is up for reauthorization this fall. Tucked into page 414, section 904 of the House bill is a provision to spend more than $300 million to establish a new federal "Center for Comparative Effectiveness" to conduct government-run studies of the economic considerations that go into drug choices.
The center will initially be funded through Medicare but will soon get its own "trust fund." The aim is to arm government actuaries with data that proponents hope will provide "scientific" proof that expensive new drugs are no better than their older alternatives. The trick is to maintain just enough credibility around the conduct of these trials to justify unpopular decisions not to pay for newer medicines.
While there's nothing inherently wrong with this sort of fiscally minded clinical research, Medicare is no ordinary payer: It dictates decisions made in the private market. So as the government begins tying its own payment decisions to the results of its own studies, there's a great temptation to selectively interpret data and arbitrarily release results. Clearly, this obvious conflict of interest demands even more outside scrutiny and transparency than has been the usual fare when it comes to government research.
The inherent complexity and limitations of conducting these sorts of "comparative" drug trials also need to be carefully considered before policy makers rush to tie sweeping payment choices to results of single studies. If not, there's a real risk that faux science and limited findings will be used to set rigid payment policies that will arbitrate access to new treatments for the entire health-care market.
In the case of the hormone-replacement study, although the government initially said the findings applied to all women--regardless of age or health status--subsequent studies using the same data show that a woman's age and the timing of hormone use dramatically change the risks and benefits. In fact, the findings of these studies seem to directly contradict some of the government's initial conclusions.
For example, women in their 50s who took a combination of estrogen and progestin or estrogen alone had a 30% lower risk of dying than women who didn't take hormones. Also, women in their 50s who regularly use estrogen alone show a 60% lower risk for severe coronary artery calcium, a risk factor for heart attack.
The Women's Health Initiative is hardly the first study to affirm that medical advancement and government cost minimization often make uncomfortable bedfellows. The $135 million, federally run Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial (Allhat) was designed in part to test whether older, less expensive blood pressure pills were as good as newer, costlier drugs. No surprise, the study showed they were, a selective interpretation of results that has subsequently been called into question by many leading experts, including Dr. Michael A. Weber, professor of medicine and associate dean for research at the State University of New York, who was on the original team of Allhat investigators.
Meanwhile, the $40 million federally funded Clinical Antipsychotic Trials in Intervention Effectiveness (Catie) trial "found" that older and less expensive schizophrenia medications were just as good as newer, more expensive (and many believe far more tolerable) "atypical" antipsychotic drugs. This result, however, has made little impact on real-world medical practice--because few physicians believe the study was credible.
Like the Women's Health Initiative, bottom-line data from Allhat and Catie were subject to parochial secrecy. Catie's complete safety data are only being released this September, almost four years after the study was completed. Moreover, the drugs involved in these studies were for conditions where one expects a great deal of individual variation in how people respond. The studies didn't take measure of that variation.
Now the government is sponsoring a poorly designed trial to test whether Avastin, a drug that is meant for injection into the veins to treat cancer, can also--when injected directly into the eye--treat macular degeneration, a leading cause of blindness. Never mind that Avastin's manufacturer, Genentech, developed a completely new drug called Lucentis, which is specifically designed to be injected into the eye and is better adapted to treat blindness.
Since a single cancer infusion of Avastin contains a large volume of the drug, breaking that same dose down into the small aliquots needed for the eye injections costs literally pennies on the dollar, making the government's study of a drug clearly not designed for eye treatments a matter of cost containment. Surely if Avastin ends up harming those eyes--a plausible consequence of this off-label, if not illegally "compounded," use--it won't be Uncle Sam on the hook with product liability lawyers, but Genentech.
Not all government-funded studies have checkered histories. Many uncover significant advances. Problems arise when the government pursues studies to achieve its own economic goals, where political motivations seem to intrude on the design and conduct of the trials and bias not only how results are interpreted but, more especially, how they are reported.
The difficult nature of these "comparative" drug studies, the sort contemplated in Schip, requires more care, not less. These studies are hard to execute by their nature, a fact given short shrift by policy makers who believe the conclusions gleaned from the research will provide immediate cost savings.
For one thing, as the Allhat study proved, detecting small clinical differences between two active drugs, such as whether one pill lowers blood pressure more than another, requires very large studies that often fail to capture all of the patient preferences and characteristics that go into real-world medical decisions. And once the study is completed, determining whether small differences are clinically meaningful can take years of follow-up.
When the trials are underfunded and too small, or are poorly designed or conducted, important differences go undetected--a failure that falsely supports the conclusion that older drugs are as good as newer ones. This flawed science seems just fine with those who support these trials largely for cost purposes.
The proposal for a "comparative effectiveness" center has become a seductively simple idea that few in Washington are willing to challenge, making it almost inevitable, barring a veto of Schip altogether. If the government does start generating these data, it should at least make bottom-line results public so others can test the government's conclusions. Medicare routinely makes its raw medical-claims data--which are far more politically sensitive--available to qualified experts for health research purposes.
The political cover offered by government-directed research will surely help when it comes time to impose unpopular limits on prescribing. That's about the only certainty in this legislative gambit, and maybe the only one that mattered when it was drafted. For many, these proposals weren't about medical discovery but bean counting. What Medicare hasn't achieved in policy circles, it's hoping to impose through the fiat of "science."
Scott Gottlieb, M.D., is a resident fellow at AEI.