Presented by Vera Hassner Sharav
14th Tri-Service Clinical Investigation Symposium
Sponsored by
The U.S. Army Medical Department
and
The Henry M. Jackson Foundation for the Advancement of Military Medicine
May 5-7, 2002
The cornerstone of public trust in medical research is the integrity of academic institutions and the expectation that universities, which rely on public funding, have a responsibility to serve the public good. Financial conflicts of interest affect millions of Americans: those who are subjects of clinical trials testing new drugs and those who are prescribed the drugs after approval. Yet the leadership paid little attention to the issue until a stream of tragic and unseemly public revelations shook public trust in academic research.
In January 2002, the Association of American Medical Colleges (AAMC) approved a report by its task force stating: "Financial conflicts of interest of clinical investigators… [is] the single issue that poses the greatest threat to maintaining public trust in biomedical research."[1] The report did not address institutional conflicts of interest, which create a culture that collides with the humanist tradition.
Physicians reading the current issue of JAMA[2] will be startled to learn that a team of Harvard University professors is advising physicians NOT to prescribe new drugs to their patients, because the safety of those drugs has not been established despite FDA approval. Adverse drug reactions,[3] they acknowledge, are the leading cause of death in the U.S. The authors analyzed 25 years of drug label changes (between 1975 and 1999) as they appeared in the Physicians' Desk Reference and found that 548 new drugs were approved during that period. Of these, 20% required subsequent black box warnings about life-threatening drug reactions; half of those adverse effects were detected within two years, while others took much longer to surface. Sixteen drugs had to be withdrawn from the market because they were lethal.
The JAMA report provides a basis for evaluating the value and relevance of clinical trial findings for clinical care. It also provides a basis for measuring FDA's performance as gatekeeper in preventing hazardous drugs from reaching the market. The authors found that clinical trials are underpowered to detect uncommon but potentially lethal drug reactions. Their design, biased subject selection, short duration, and the accelerated approval process almost ensure that severe risks go undetected during clinical trials. The JAMA report validates the findings of a Pulitzer Prize-winning investigative report in the Los Angeles Times by David Willman.[4]
Willman uncovered evidence demonstrating the adverse consequences of the 1992 Prescription Drug User Fee Act (PDUFA), the law that brought industry money and industry influence to the FDA. The approval process for new drugs was accelerated, and the percentage of drugs approved by the FDA rose from 60% at the beginning of the decade to 80% by the end of the 1990s. Willman reported that the FDA was the last to withdraw several drugs that had already been banned by European health agencies. There was a concomitant, precipitous rise in the approval of lethal drugs: between January 1993 and December 2000, seven deadly drugs were brought to market only to be withdrawn after they had been linked to at least 1,002 deaths.[5] In a follow-up article in August 2001,[6] Willman reported that the list of lethal drugs withdrawn since September 1997 had grown to a dozen, nine of which had been approved after 1993.
None of the drugs was for a life-threatening condition: one was a diet pill, another a heartburn remedy, another an antibiotic that proved more dangerous than existing antibiotics. The approval of these drugs illustrates the collision between corporate interests, which revolve around maximizing profits through the marketing of new, expensive drugs, and the public's interest in safety. FDA's "expert advisory panels" demonstrate the agency's loss of independence: most advisory panel members have undisclosed financial ties to the manufacturers whose drugs they recommend for FDA approval.[4]
Corporate influence in academia:[7]
Until 1980 a firewall separated industry and academia to ensure that academic pursuits were independent of commercial influence. When the Bayh-Dole Act of 1980 encouraged "technology transfer," that firewall was removed, allowing federally funded universities to patent and license inventions developed by faculty members. Researchers and institutions were free to enter into ventures and partnerships with biotechnology and pharmaceutical companies, and they did. It is estimated that of the $55 billion to $60 billion the biomedical industry spends on research and development, large companies spend one-fifth at universities and small companies one-half.[8] With the flow of corporate money came corporate influence and control. The culture within academic institutions changed: business ethics swept aside the moral framework within which academia had functioned. Gone were such niceties as intellectual freedom and the free and open exchange of ideas; so was full disclosure of research findings. Gone was the culture of social responsibility and a social conscience. Finally, the absence of independent, third-party review has put the integrity of the process and the quality of the products in jeopardy.
The investigative series in the Seattle Times[9] provides insight into that changed culture at the Fred Hutchinson Cancer Center during the mid-1980s. The copiously documented series examined the conduct of research and patient care in two cancer trials. It illustrates how a new entrepreneurial culture in medicine encouraged doctors to push the limits beyond what can be considered ethical research, subjecting patients to unjustifiable risks and increased suffering. At the Hutch, a physician with a conscience, who clearly did not embrace the new entrepreneurial ethos, blew the whistle.
It has been said, "Doctors fear drug companies like bookies fear the mob."[10]
Researchers whose findings collide with corporate interests are finding out that academic freedom is no longer operational. Two high-profile examples from our Canadian neighbors illustrate that researchers can face intimidation by both corporate sponsors and university administrators. In 1996 Dr. Nancy Olivieri[11] found that a generic drug for thalassemia, manufactured by Apotex, the sponsor of the trials, failed to sustain long-term efficacy. Dr. Olivieri informed Apotex and the chair of the institution's research ethics board (REB) and moved to inform patients in the clinical trials of the risk, as was her ethical obligation. Apotex terminated the two trials and warned Olivieri of legal consequences if she informed patients or anyone else. Apotex, meanwhile, had reportedly contributed $13 million to the University of Toronto.
When Olivieri attempted to publish her findings, Apotex threatened to sue her for breach of confidentiality. The University failed to defend Olivieri, the principles of research ethics, or academic freedom. It threatened to dismiss her, initiated a biased inquiry, and knowingly relied on false accusations by company-funded investigators, all of which were later discredited by an independent investigation by the Canadian Association of University Teachers.[11] Olivieri's publication of her negative findings was delayed for two years. The case is a dramatic illustration of conflicts of interest and of the collision between corporate interests and the right of research subjects to be informed of identified risks, as required by the principle of informed consent.
Another example of the clash between academic freedom and corporate interests, again involving retribution by the University of Toronto, is the case of Dr. David Healy,[12] a prominent psychopharmacologist and historian of psychiatry at the University of Wales. Healy had been hired to head the Mood Disorder Program at the University of Toronto's Center for Addiction and Mental Health. The program is reported to get 52% of its funding from corporate sources, and the Center received $1.5 million from Eli Lilly. After Healy criticized the drug industry in an article published by The Hastings Center, Eli Lilly withdrew its $25,000 contribution to Hastings. When Healy delivered a paper expressing his concern about the risk of suicide in some patients taking antidepressant drugs such as Prozac, the University rescinded his appointment. Academic freedom is but one casualty of corporate influence.
As Marcia Angell correctly observed[13] in her last editorial in the NEJM, corporate influence in medicine is ubiquitous, extending far beyond individual physician-researchers: it determines what research is conducted, how it is done, and the way it is reported. Short-term corporate goals take priority over society's long-term needs. Under corporate influence, more research is done comparing trivial differences between one drug and another, and less is done to gain knowledge about the causes of disease.
The pharmaceutical industry spends $15 billion[14] to buy the loyalty of health care providers and allied professionals: educators, investigators, and non-profit organizations. Drug companies shower physicians with gifts, honoraria, and global junkets, and provide fees for referring patients to clinical trials. They endow academic chairs and programs, and provide grants, stock equity, and patent royalties to researchers and institutions; even publication attribution is controlled by sponsoring companies. They make contributions to professional associations and patient advocacy groups, and sponsor their conferences.
The American Medical Association sells the rights to its "physicians' master file," containing detailed personal and professional information on every doctor practicing in the United States, to dozens of pharmaceutical companies for $20 million.[15] That database provides drug marketers with invaluable information. Journals and the media profit from drug advertising income. Such financial inducements assure industry a fraternity of loyal allies, among them journal editors, who protect their own interests and those of their corporate benefactors. For example, the British journal The Lancet reported that the editor of the British Journal of Psychiatry had published a favorable review of a drug while he was receiving an annual fee of £2,000 from the drug's manufacturer.[16] Although clinical research is highly competitive, the interdependent, collaborative network of stakeholders tightly controls a self-administered, opaque oversight system.
The pharmaceutical industry also buys political influence in Congress and the administration. Public Citizen[17] reported that the industry pays 625 lobbyists in Washington, more than one for every member of Congress. Industry spent $262 million on political influence in the 1999-2000 election cycle, more than any other industry. This influence secures the industry profit-enhancing legislation and reduced regulation. Following the 1992 Prescription Drug User Fee Act (PDUFA), which precipitated fast-track drug approval, Congress passed the 1997 FDA Modernization Act, providing industry with a huge financial incentive: a six-month patent extension for drugs tested in children. These legislative initiatives are a financial bonanza for the drug industry, translating into billions of dollars in revenues; a six-month patent extension can generate as much as $900 million for a single drug.[18]
However, the accelerated pace of research and of the drug approval process has taken an enormous toll in human casualties. Adverse drug reactions are the leading cause of death in the United States, and women and the elderly are at special risk.[19] The LA Times revealed that between September 1997 and September 1998, nearly 20 million Americans took at least one of the harmful drugs that the FDA was later forced to withdraw.[4] A comparison of the FDA's 25-year approval-and-withdrawal record analyzed by Lasser et al.[2] in JAMA with the LA Times analysis of the agency's most recent five years raises alarms: 16 drugs withdrawn in 25 years, 12 in five. Most of those withdrawn drugs had been approved after 1993. The LA Times noted, "never before has the FDA overseen the withdrawals of so many drugs in such a short time."
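To put those two figures on a common footing, the arithmetic is simple (the counts are those cited above; the per-year framing is added here only for illustration):

\[
\frac{16 \text{ withdrawals}}{25 \text{ years}} \approx 0.6 \text{ per year}
\qquad \text{versus} \qquad
\frac{12 \text{ withdrawals}}{5 \text{ years}} \approx 2.4 \text{ per year},
\]

roughly a fourfold increase in the rate at which approved drugs had to be pulled from the market.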
Since 1994, reports in the press have described ethical violations that undermined the safety of subjects in clinical trials, causing some to die when they might have lived.[20] The violations occurred because a culture of expediency had replaced a culture of personal moral responsibility. Systemic ethical violations were revealed at the nation's leading research centers[21], including Duke, the University of Pennsylvania, New York Cornell Medical Center, Johns Hopkins, Fred Hutchinson, NIMH, the University of Maryland, and Harvard in China. The evidence demonstrates that the problem is not merely a few rogue investigators; the problem is an entrenched, insular system and weak federal oversight.[22] The federal Office for Protection from Research Risks (now OHRP) was compelled to shut down, temporarily, clinical trials at some of the nation's most prestigious institutions.[23]
In September 2000, near the end of her term as Secretary of HHS, Donna Shalala acknowledged in NEJM, "I did not expect, or want, to complete my tenure . . . by raising questions about the safety of patients in clinical research. However, recent developments leave me little choice. . ."[24] Unfortunately, the only initiative taken was to reorganize the federal oversight agency (now OHRP) under a new director who believes that education and a collaborative system of voluntary accreditation will repair the damage.[25] I disagree. Ethical violations such as failure to disclose risks and to protect the welfare of patient-subjects are the result of conflicts of interest – not poor education.
Consider an example of complicity by government officials who provide a shield of secrecy while claiming "transparency." On February 7, 2002, the Alliance for Human Research Protection[26] requested a copy of the current proposals received by the Secretary of HHS in accordance with Section 407 of the federal regulations (45 CFR 46, Subpart D). Subpart D protects children, who are incapable of exercising the right to informed consent, from experiments involving greater than minimal risk if there is no potential benefit to them. However, Section 407 provides an appeal process to the Secretary. The regulation stipulates that nontherapeutic research with no potential direct benefit to the child may be permitted if the Secretary, after consultation with "a panel of experts in pertinent disciplines and following an opportunity for public review and comment," finds that "the research presents a reasonable opportunity to further the understanding, prevention or alleviation of a serious problem affecting the health or welfare of children."
Our request was denied with the following statement:[27] "Release of information would interfere with the agency's deliberative and decision-making processes. Further, each researcher has a commercial and privacy interest in the release of any information." A similar reason was given for denying disclosure of the list of experts: "Release of expert identities associated with the review of individual protocols would interfere with the agency's deliberative and decision-making process and have a chilling effect on the ability of the agency to obtain frank and candid opinions from its reviewers." This is an example of federal officials attempting to block public access to information guaranteed under federal regulation.
The role of IRBs and bioethicists in this enterprise:
Ostensibly, IRBs were established to serve as gatekeepers to protect human subjects. But lacking independence, they actually function as facilitators for the accrual of grant monies by their parent institutions. It is not surprising, therefore, that IRBs have failed to protect research subjects from harmful experiments or to weed out research that lacks scientific justification. Specifically, what conclusion is one to draw from the fact that nearly 90% of the protocols approved by the IRB at the NIMH apparently failed to meet ethical or scientific justification? Following an investigative series in The Boston Globe[28] in 1998, the director of NIMH ordered an independent evaluation of all 89 clinical trials at the Institute. The result: 29 were suspended at once, and an additional 50 protocols were put on probation for lack of scientific justification. That is 79 out of 89.[29]
In "Pharma Buys a Conscience," Dr. Carl Elliott, [30] director of the Bioethics Center, Minnesota, (who happens to be a physician) is an insightful critical examination of bioethics. Elliott criticizes his colleagues who have been seduced by corporate financial incentives. He points out how conflicts of interest have undermined the professional integrity of bioethics. He lists ethics consultants and their corporate benefactors,[31] as well as what he calls, "corporate-academic dating services" that match academic "experts" with businesses seeking expertise. He notes that corporate money and corporate influence is so entrenched at university medical centers that overt threats need not be explicitly made, everyone knows what's expected. Bioethicists are in demand because they lend the appearance of legitimacy to corporate ventures. Therefore, corporations funnel money to bioethics centers, and pay bioethicists retainers to serve on their advisory boards. But, as Elliott points out, "The problem with ethics consultants is that they look like watchdogs but can be used like show dogs."
Indeed, bioethicists have lent the seal of legitimacy to highly questionable, if not outright unethical, research. Their corporate affiliations are not publicly disclosed when they render opinions in the media, on IRBs, or on government advisory panels. An institutionalized veil of secrecy shields academics who sit on government-appointed advisory panels. While their recommendations affect public policy, those recommendations may also serve the financial interests of the corporations that pay them.
In 1997, I testified before the National Bioethics Advisory Commission (NBAC) about financial conflicts of interest, betrayal of trust, and the undue influence of drug companies in medicine. I pointed out that physicians who accept large payments to refer patients for clinical trials testing the safety and efficacy of new products are breaching medical ethics. The Wall Street Journal, for example, reported that doctors with academic affiliations have been paid as much as $30,000 per patient per drug trial[32] in schizophrenia and Alzheimer's studies.
Following the testimonies, Dr. Harold Shapiro, chair of the NBAC and president of Princeton, indicated that the NBAC would not focus on the financial arrangements of research investigators because, "after all, this is a capitalist country." Dr. Shapiro neglected to mention that he was drawing a salary from Dow Chemical Company, on whose advisory board he sat.[30] Such publicly undisclosed personal financial arrangements by academics who sit on public policy advisory boards are not at all unusual. The public is under the illusion that so-called "expert advisory panels" are independent and render objective, disinterested recommendations. The public does not suspect that these panelists from academia have financial ties to biomedical companies, and therefore conflicts of interest. No one is held accountable for formulating public policy recommendations that serve an undisclosed self-interest.
What chance does a vulnerable individual patient have as an outsider confronting a fraternity of insiders – all of whom have something to gain from his participation as a subject? The system serves its stakeholders. Revelations about the system's failure to protect human subjects from preventable harm have come to light, not because of any internal safety mechanisms, but as a result of information provided by conscientious whistle blowers and investigative press reports.
Following are my "dirty dozen" corrupt research review practices that undermine both the safety of human subjects and the integrity of research findings:
1. Efficacy by design: washout / placebo; unequal dose comparison = bias.
2. Subject selection bias: younger, healthier subjects than those likely to be prescribed treatment; randomization criteria; recruitment coercion.
3. Assessment of risk / benefit: entirely subjective; it depends on who is assessing.
4. IRB evaluation and approval process: vote without examination of protocol; intimidation; IRB shopping.
5. Misleading disclosure documents = Uninformed Consent; non-disclosure of the absence of benefit and of newly identified risks = Uninformed Consent.
6. Suppressing adverse event reports: "don't ask, don't tell."
7. Interpretation of findings: "efficacy in expert hands is not the same as clinical effectiveness."[33]
8. Biased advisory panels: FDA panels recommend drugs that kill; bioethicists: conscience for hire.
9. Professional guidelines and recommendations.
10. Corrupted published data: suppression of negative findings; ghost authorship.
11. Complicit government oversight officials fail to enforce, preferring to redefine the standards: Who is a human subject? What's a condition? Can children's assent be called consent?
12. Using patients as laboratory animals in symptom provocation, relapse inducing experiments.
Case 1: Placebo design: ethics vs financial stakes
Corporate influence begins with the protocol design and subject selection. For example, unequal dosage comparisons will elicit different side effects that may skew the results. Selective inclusion criteria can effectively hide adverse side effects that will later be revealed in clinical practice. Drug "washout" followed by placebo allows sponsors to manipulate the conditions under which a new drug is tested. Specifically, by making patients very sick during washout, the efficacy of the new drug is likely to be inflated. Such manipulations may explain why a drug's efficacy in clinical trials is usually not matched under normal clinical conditions.
The use of placebo-controlled trials in patients for whose condition an effective treatment exists has been the subject of heated debate. The FDA has been severely criticized for its placebo-control policy because it undermines patients' best interests, in violation of the Declaration of Helsinki. Of particular concern is the risk of suicide in severely depressed or psychotic patients, who are at increased risk when their condition is destabilized by drug "washout" and placebo. They are at risk whether or not the drugs are an effective treatment, because psychotropic drugs are associated with severe withdrawal symptoms.
Carl Elliott described his battle with his university's IRB when he challenged placebo-controlled trials: "Tables were pounded. Faces turned scarlet. Blood pressures soared. Yet the IRB continued to approve many of the trials, over my objections and those of other members of the committee. The hospital administration eventually dissolved the IRB and reconstituted it with new membership."[30] Elliott explains that the reason for the explosive reaction was that "everyone's interests were involved," not just those of the sponsoring drug company. These trials generated huge income for the hospital and investigators alike, some earning between $500,000 and $1 million a year.
Case 2: Biased Clinical Guidelines:
An investigative report by Jeanne Lenzer[33] in the British Medical Journal (March 2002) sheds light on the underlying factors that led the American Heart Association to "definitely recommend" a treatment that could cost more lives than the disease itself. In August 2000 the Heart Association promoted alteplase (tPA), manufactured by Genentech, as a treatment for "brain attack." The Association upgraded its recommendation of tPA for stroke, placing it in the class I category, despite the fact that most controlled trials showed that such thrombolytics increase mortality in acute ischaemic stroke. In its annual report the Association described tPA as follows: "A clot-busting drug that helped revolutionize heart attack treatment, tPA holds enormous potential for the treatment of ischemic stroke, which accounts for 70 to 80 percent of all strokes. It is estimated that tPA could be used in 400,000 stroke cases per year to save lives, reduce disability and reverse paralysis."[33]
The Heart Association made its bold recommendation on the basis of a single controlled clinical trial conducted by the National Institute of Neurological Disorders and Stroke (NINDS); six other randomized studies reached the opposite conclusion. Lenzer reported that the NINDS study design ensured a favorable finding for tPA because the patients selected to receive tPA had milder stroke scores at baseline than the patients selected for the placebo arm, who had worse strokes. Furthermore, only one fifth of those initially diagnosed were found to have had a stroke, which of course put the non-stroke patients at increased risk of harm with no potential benefit. Two observational studies reached opposite conclusions; the Cleveland study found that twice as many patients given tPA died compared with those who did not receive it.
Most suspicious of all, however, is the refusal by NINDS to release the raw data from that single trial; Lenzer's request under the Freedom of Information Act was rejected. Furthermore, the company vigorously opposes a head-to-head study comparing alteplase to streptokinase for myocardial infarction. Dr. Elliott Grossbard, a Genentech scientist, stated the company's position: "We don't know how another trial would turn out… [another study] may be good for America, but it wasn't going to be a good thing for us."[34]
The panel of experts who wrote the Heart Association's clinical practice guideline recommending tPA failed to mention the catastrophic results of the Cleveland study. According to the BMJ article, eight of the nine expert panel members had financial ties to the manufacturer, Genentech. Dr. Jerome Hoffman, the single panel member without ties to Genentech, wrote a dissenting opinion that was not even acknowledged by the panel. Hoffman questioned the tPA endorsement in a BMJ article, charging that the NINDS findings were artificially manipulated to exclude 95% of stroke patients.[35]
Lenzer reported that Genentech had contributed over $11 million to the Heart Association and had also paid $2.5 million to build the Association a new headquarters. Only after these financial conflicts of interest became public knowledge did the Heart Association revise its class I recommendation and withdraw statements that tPA "saves lives."
The Heart Association is hardly unique: a recent report in JAMA[36] (2002) found that 87% of the authors who wrote treatment practice guidelines in all fields of medicine had financial ties with the pharmaceutical industry. In 1998 the NEJM found that 96% of medical journal authors whose findings were favorable to a product had financial ties to the manufacturer.[37] As questions have been raised about the value of mammography and other cancer screening recommendations, one grows suspicious that most highly publicized screening campaigns are launched by stakeholders with financial interests in the business. Their recommendations may turn out to be hazardous to public health.
Case 3: Subject selection bias in antidepressant drug trials:
Dr. Thomas Laughren, head of the FDA's psychiatric drug division, made the following concessions at a Houston conference in 2000: "there is a certain amount of myth" in the claimed efficacy of psychotropic drugs, which have shown only marginal effect above placebo. "We don't know how effective they are, only that in clinical trials, they demonstrated somewhat greater efficacy than placebo." He then acknowledged: "there isn't any standard for what effect size is required to get a psychotropic drug on the market… we have never, in my experience, not approved a drug because of a finding that the effect size is too marginal."[38]
To obtain even a marginal effect above placebo, 60% to 85% of the patients most likely to be prescribed antidepressant drugs are excluded by the eligibility criteria. That is the finding of a Brown University analysis[39] of 31 antidepressant trials published from 1994 to 1998. Only 15 percent of 346 depressed patients evaluated in a Rhode Island hospital psychiatric clinic would have met the eligibility requirements of a standard drug trial. Such a selection process inevitably skews the results, thereby invalidating the published findings and claims about the efficacy of antidepressants. The study's author, Zimmerman, expressed concern: "If antidepressants are, in fact, not effective for some of these large subgroups of depressed individuals, their prescription incurs an unjustifiable exposure of risks and side effects, and alternative treatments need to be considered."
I would also argue that if the patients in clinical trials don't resemble the patients who are later prescribed these drugs – what relevance do the trials have for clinical care?
Case 4: Antidepressant drug efficacy hype:
A report in the April 10, 2002 issue of JAMA describes a major, government-sponsored,[40] 12-site, controlled clinical trial conducted by prominent psychopharmacologists comparing sertraline (Zoloft), Hypericum perforatum (St. John's wort), and placebo. The investigators acknowledged:
"An increasing number of studies have failed to show a difference between active antidepressants and placebo. Many of the presumed factors underlying this phenomenon were carefully attended to in this study, e.g, adherence to quality control by rater training, treatment adherence monitoring, inclusion of experienced investigators, and carefully defined entry criteria. Despite all of this, sertraline failed to separate from placebo on the two primary outcome measures"
Between December 1998 and June 2000, 340 adult outpatients with major depression and a baseline total score of at least 20 on the Hamilton Depression Scale (HAM-D) were recruited and randomly assigned to receive St. John's wort (900 to 1500 mg), Zoloft (50 to 100 mg), or placebo for 8 weeks. Responders at week 8 could continue blinded treatment for another 18 weeks. The report states: "on the 2 primary outcome measures, neither [Zoloft] nor [St. John's wort] was significantly different from placebo. Full response occurred in 31.9% of the placebo-treated patients vs 23.9% of the [St. John's wort]-treated patients and 24.8% of [Zoloft]-treated patients."
Clearly a dual dilemma faces those who are invested in promoting psychopharmacology: if they admit that the drugs don't really work, then placebo-controlled trials are ethically justified; however, absent a demonstrable benefit of the drugs, it is unethical to expose patients to the known side effects and the potential long-term risks of harm. But such an acknowledgement would undercut the financial interests of the pharmaceutical industry and of all the stakeholders who depend on corporate largesse. The prominent psychiatrists, whose names are too numerous to list at the head of the JAMA article, found a way to spin the negative results of the trial. In their conclusion they ignore their own finding, namely, that neither the antidepressant drug Zoloft nor St. John's wort was more effective than placebo; indeed, placebo may have an edge. The investigators' conclusion pretends that Zoloft was not part of the three-arm trial: "This study fails to support the efficacy of H perforatum in moderately severe major depression."
An accompanying JAMA editorial by Dr. David Kupfer,[41] past president of the American College of Neuropsychopharmacology, also puts a spin on the findings:
"The current study on the use of St John's wort in the treatment of MDD is the second one within a year to conclude that St John's wort is not effective. These trials were conducted because, even though St John's wort is widely used for the treatment of major depression and depressive symptoms, its efficacy has not been clearly established"
How could these prominent leaders of psychiatry draw a conclusion that contradicts the study findings? In compliance with JAMA's conflict-of-interest disclosure policy, a long list appended to the article discloses some of the authors' financial ties to industry; it speaks for itself.
A troubling question arises: Why did the editors of JAMA fail to seek an independent evaluation of the research findings? Why did JAMA select a psychiatrist whose financial ties include membership on the advisory board of Pfizer, the drug company whose product was being reviewed?[42]
Case 5: Undisclosed negative data:
An editorial in the British Medical Journal by Richard Smith, "Maintaining the Integrity of the Scientific Record,"[43] stated: "We editors of medical journals worry that we sometimes publish studies where the declared authors have not participated in the design of the study, had no access to the raw data, and had little to do with the interpretation of the data. Instead the sponsors of the study – often pharmaceutical companies – have designed the study and analyzed and interpreted the data. Readers and editors are thus being deceived."
Even when a study is reviewed by a legitimate physician who has no financial conflicts of interest, there is no assurance that the process has not been corrupted. Here is an example: in 2001, Dr. Michael Wolfe was asked to write an editorial for JAMA about the findings of a six-month study testing the arthritis drug Celebrex in more than 8,000 patients.[44] The editors sent him the manuscript, indicating they were anxious "to rush the findings into print." Based on the data reported in the manuscript, Wolfe wrote a favorable review. When he later saw the complete data, as a member of an FDA advisory panel, he was "flabbergasted." To his embarrassment he discovered that the study had actually lasted a year, and that when all the data were evaluated, Celebrex offered no proven safety advantage over two older drugs in reducing the risk of ulcers. He also learned that although the study's 16 authors included faculty members of eight medical schools, all were either employees of the manufacturer, Pharmacia, or paid consultants to it. JAMA's editor, Catherine DeAngelis, was quoted in the Washington Post as saying: "We are functioning on a level of trust that was, perhaps, broken."[44] Peer review, the integrity of medical guidelines, and the scientific literature have all been corrupted by the corrosive influence of industry.
Case 6: The 1997 "pediatric rule" puts children's lives at risk:
Children are being sought to serve as "risk bearing subjects," risking their lives to test drugs. For example, the FDA approved a pediatric trial exposing 100 children to Janssen Pharmaceutica's heartburn drug, Propulsid.[45] The FDA approved the trial and allowed babies to be enrolled even after the drug had been linked to sudden deaths. The babies who were recruited were diagnosed with gastroesophageal reflux, a condition hardly considered life-threatening; doctors say that most babies outgrow the problem by their first birthday. Among the casualties was a nine-month-old infant, Gage Stevens, who was recruited by researchers at the University of Pittsburgh. According to press reports, the parents learned about the risks associated with Propulsid from an Associated Press report only AFTER their baby was dead.
The LA Times reported that Propulsid's danger to the heart was identified as early as January 1995, when FDA's senior gastrointestinal expert informed Janssen executives that recent adverse-reaction reports showed their drug was prolonging the QT interval, perhaps resulting in deaths. The British Medicines Control Agency had warned against any use of Propulsid in infants since 1998, and cautioned against prescribing it to children up to age 12. The consent form given to the parents falsely indicated that the FDA had approved Propulsid for children. The parents said the doctor conducting the clinical trial was adamant that Propulsid was the best treatment for their child, and that they would never have consented had they known of the previous deaths. The mother was quoted by CBS News, exclaiming: "It's like giving you chemotherapy for a toothache… the benefits just don't outweigh the risks. I mean, it's reflux! It's not something that's (going to kill him)."[46] The final blow was delivered when the baby's parents learned from the autopsy report that Gage's esophagus did not show any signs of "significant inflammation or other hallmarks of gastroesophageal reflux."[47] In other words, the baby did not have the condition for which he was entered as a subject into a fatal clinical trial.
A spokesman for Janssen (a Johnson & Johnson subsidiary) indicated that the company did not promote Propulsid for use by children. However, the LA Times reported, the company acknowledged that it did make two "educational grants" to the North American Society for Pediatric Gastroenterology and Nutrition. The society's literature advised doctors that Propulsid could be used safely and effectively in children.
The FDA did not pull the drug off the market even as the death toll rose. In December 2000, the LA Times reported that Propulsid had been cited as a suspect in 302 deaths overall. FDA administrators now concede that the agency failed to contain Propulsid's fatal risk. In comments to an FDA advisory committee in June 2000, FDA's Dr. Florence Houn said: "The labeling probably was not effective." In the end, it was not government intervention that forced Janssen to stop marketing Propulsid in the U.S.; it was litigation. I question the wisdom of a policy that encourages the use of children in drug trials BEFORE the safety and efficacy of the drugs have even been established in adults.
Case 7: Children exposed to risks in psychotropic drug trials:
Psychotropic drugs are being tested in children despite the acknowledged risks of harm. Psychotropic drugs are advertised as normalizing a "chemical imbalance" in the brain. In fact, they do the opposite: they induce profound changes in the central nervous system, with demonstrable physical and neurological impairments.[48] Dr. Steven Hyman, former director of NIMH and an expert on the mechanisms by which psychoactive drugs work, explained that those mechanisms are the same whether the drugs are abused or prescribed.[49] Hyman stated that antidepressants, psychostimulants, and antipsychotics create "perturbations in neurotransmitter function."[50] The drugs' severe adverse side effects are symptoms of their disruptive effect on the neurotransmitter system and on brain function.
In 2001 Dr. Benedetto Vitiello, director of NIMH's Child and Adolescent Treatment and Preventive Interventions Branch, acknowledged the impact of FDAMA: "pediatric psychopharmacology has recently seen an unprecedented expansion… clinical trials in youths has more than doubled in the last few years."[51] Indeed, children as young as three are being recruited to test mind-altering drugs that may affect their developing brains, and parents are being offered financial inducements to volunteer their children for drug trials. The foremost problem with prescribing or testing psychotropic drugs in children is the absence of any objective criteria for diagnosing children with pathological behavioral problems that would justify pharmacologic intervention. Vitiello acknowledged the "diagnostic uncertainty surrounding most manifestations of psychopathology in early childhood."[52] He also acknowledged the possibility of long-term harm: "The impact of psychotropics on the developing brain is largely unknown, and possible long-term effects of early exposure to these drugs have not been investigated."
Eli Lilly's highly touted new antipsychotic, Zyprexa,[53] reveals much about the collision between corporate interests and the health and safety of children. In clinical trials averaging six weeks, Zyprexa was tested in 2,500 adults. The drug was linked to serious, in some cases life-threatening, side effects requiring hospitalization in 22% of those tested.[54] Acute weight gain of 50 to 70 lbs is usual, and with it comes an increased risk of diabetes. FDA data obtained under FOIA reveal a 65% dropout rate and only a 26% favorable response. During those six-week clinical trials there were 20 deaths, of which 12 were suicides.[54] David Healy, whose research found a link between antidepressants (selective serotonin reuptake inhibitors) and suicide, says that, as far as he can establish, the data from these trials "demonstrate a higher death rate on Zyprexa than on any other antipsychotic ever recorded."[55] In 2000, the FDA approved Zyprexa for short-term use only in bipolar patients.[56]
Yet children aged six to eleven were recruited for clinical trials to test the drug. According to their published report, UCLA investigators tested Zyprexa in children who had not even been diagnosed with schizophrenia; the children were diagnosed with a variety of questionable psychiatric disorders, including ADHD.[57] All of the children in the trial experienced adverse effects, including sedation, acute weight gain, and akathisia (restless agitation), and the trial was terminated less than six weeks after it had begun.
Controversy surrounds a Zyprexa trial at Yale University. In that experiment, 31 youngsters aged 12 to 25 who have not been diagnosed with any psychiatric illness are being exposed to the drug for one year. The stated rationale given by the researchers (who are under contract with the sponsor) is their speculation that these children may be "at risk" for schizophrenia. Since there are as yet no objective tests or biological markers for the illness, they hypothesize without evidence, merely on the basis of conjecture: the assumption that the children may develop schizophrenia because one of their siblings has been diagnosed with the disorder.
The risk of schizophrenia in the general population is 1%. For siblings of a diagnosed patient, the risk rises to somewhere between 2% and 15%; in other words, there is at least an 85% likelihood that these children will never develop schizophrenia.
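The "at least 85%" figure is simply the complement of the upper end of the sibling-risk range cited above (an illustrative calculation based on the figures quoted in the text, not a result reported by the Yale study):

\[
1 - 0.15 = 0.85,
\]

and the likelihood is higher still if the true sibling risk lies closer to the 2% lower bound.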
Given the absence of scientifically accurate tools for interpreting psychiatric symptoms, psychiatrists cannot as yet accurately diagnose schizophrenia, much less predict which children will develop it. Is it ethical to expose healthy children to the risks of drug-induced pathology on such speculation? The Wall Street Journal aptly noted that such a study "raises the question of whether the drug companies are mainly interested in 'creating' a new illness that requires drug treatment."
Conflicts of interest in clinical trials result in deadly medicine:
Conflicts of interest have corrupted the soul of the American university, the ethics of medicine, the integrity of the scientific record, and the safety of patients who serve as human subjects in pre- and post-marketing clinical trials. Adverse reactions to FDA-approved drugs are the leading cause of death in the United States.[2],[3] The JAMA report advises physicians against prescribing new drugs "unless they represent an important medical advance," because newly approved drugs are likely to be unsafe, even lethal. The JAMA report corroborated the findings of the LA Times' earlier report: in some cases the FDA approved new drugs despite pre-marketing evidence indicating potential danger. In his JAMA editorial, FDA's Dr. Robert Temple attempts to disavow agency responsibility, while acknowledging: "Premarketing trials in a few thousand (usually relatively uncomplicated) patients do not detect all of a drug's adverse effects… and sometimes the postmarketing discoveries cause the drug to be withdrawn."[58]
Why has the FDA's track record of protecting the public from unsafe drugs worsened since 1993? The answer is undue corporate influence and a tainted drug testing and approval process that has compromised the safety of both clinical trial subjects and patients in clinical care. The absence of independent, third-party reviewers has undermined the safety of drug development and approval, and the tainted process has led the FDA to approve deadly drugs that killed patients while enriching their manufacturers. The LA Times reported that seven lethal drugs that were ultimately withdrawn between 1997 and 2000 generated $5 billion in sales. It remains to be seen how the American public will react to the revelation that new drugs are less safe than old drugs, and that when they take a new, FDA-approved drug, they are essentially testing its safety. Public trust is not likely to be restored until the integrity of the process and of the institutions is restored through independent, unbiased review. When the condition is life-threatening, or when a new drug offers a significant advance over existing treatments, the risks may be justified. But no one should have to die from a heartburn drug or a diet drug.
[1] Kelch, RP, "Maintaining the public trust in clinical research," NEJM, Jan 24, 2002, vol 346: 285-287.
[2] Lasser KE, et al "Timing of New Black Box Warnings and Withdrawals for Prescription Medications," JAMA, May 1, 2002, 287:2215-2220.
[3] Lazarou J, Pemeranz B, Corey PN. "Incidence of adverse drug reactions in hospitalized patients: a meta-analysis of prospective studies," JAMA, 1998, 279: 1200-1205. See also, Wood AJ. The safety of new medicines: the importance of asking the right questions. JAMA.1999, 281:1753-1754.
[4] Willman, D. "How a New Policy Led to Seven Deadly Drugs," Los Angeles Times, Dec 20, 2000, Front page, http://www.latimes.com/news/nation/reports/fda/lat_fda001220.htm.
[5] The seven drugs (and the number of suspected deaths they caused) are: Lotronex (5 deaths), Redux (123 deaths), Raxar (13 deaths), Posicor (100 deaths), Duract (68 deaths, including 11 liver failures), Rezulin (391 deaths, 91 liver failures), and Propulsid (302 deaths). The figures are based on adverse drug reaction reports submitted to the FDA, which are estimated to reflect 10% of actual adverse drug reactions. See Willman, 2000.
[6] The five additional drugs withdrawn between 1997 and Aug 2001 are: Raplon, Hismanal, Seldane, Pondimin, and Baycol. See Willman, D. "Drug Tied to Deaths Is Pulled," Los Angeles Times, Aug 9, 2001, Front page.
[7] Bodenheimer, T. "Uneasy Alliance: Clinical Investigators and the Pharmaceutical Industry," NEJM, May 18, 2000, Vol. 342, No. 20, http://www.nejm.org/content/2000/0342/0020/1539.asp; and Bodenheimer, T. "Conflict of interest in clinical drug trials: a risk factor for scientific misconduct," DHHS Conflict of Interest Conference, Aug 15, 2000, http://ohrp.osophs.dhhs.gov/coi/bodenheimer.htm
[8] Kowalczyk, L. "New steps urged on university research bias," The Boston Globe, Feb 21, 2001, http://www.boston.com/dailyglobe2/052/nation/New_steps_urged_on_university_research_biasP.shtml
[9] Wilson, D. "Uninformed consent," investigative series, The Seattle Times, March 11-15, 2001, Front page. http://seattletimes.nwsource.com/uninformed_consent/breastcancer/story1.html.
[10] Harold Elliott, a psychiatrist at Wake Forest University, quoted by Carl Elliott, Ref 30
[11] Thompson, J, Baird, P, and Downie, J. "Report of the Committee of Inquiry on the Case Involving Dr. Nancy Olivieri, the Hospital for Sick Children, the University of Toronto, and Apotex, Inc.," Canadian Association of University Teachers, 2001, http://www.caut.ca/english/issues/acadfreedom/Olivieri%20Inquiry%20Overview.pdf.
[12] Boseley, S. "Bitter pill," The Guardian, Monday May 7, 2001,
http://www.guardian.co.uk/Archive/Article/0,4273,4181987,00.htm.
[13] Angell, M. "Is Academic medicine for sale?" NEJM, 2000, 342: 1516-1518.
[14] Dembner, A. "Drug firms woo doctors with perks: Billions spent in bid to gain brand loyalty" The Boston Globe, 5/20/2001 Front page.
[15] Stolberg, SG and Gerth, J, "MEDICINE MERCHANTS / Tracking the Doctors, " The New York Times, November 16, 2000, at: http://www.nytimes.com/2000/11/16/science/16PRES.html?printpage=yes
This comprehensive master physician list gives this industry the most powerful marketing tool in the world. "Overall spending on pharmaceutical promotion increased more than 10 percent last year, to $13.9 billion from $12.4 billion in 1998."
[16] "Just how tainted has medicine become?" Editorial, The Lancet, Vol. 359, Number 9313, April 6, 2002
[17] Public Citizen Report. The Other Drug War: Big Pharma's 625 Washington Lobbyists, July 23, 2001, http://www.citizen.org/congress/drugs/pharmadrugwar.html.
[18] Zimmerman, R. "Drug Makers Find a Windfall Testing Adult Drugs on Kids," The Wall Street Journal, Feb 5, 2001.
[19] A 2000 study of nursing home patients, for example, found that of the 20,000 fatal or life-threatening medication reactions, 80% were preventable. See also the 2001 report by the General Accounting Office, "Drug Safety: Most Drugs Withdrawn in Recent Years Had Greater Health Risks for Women," and Cohen, J. OVER DOSE: The Case Against the Drug Companies, Tarcher/Putnam, 2001.
[20] For example: Roe, L. Dangerous experiments. Investigative Report, Channel 5 KSTP-TV News, Minneapolis, MN, October 26, 1994; Willwerth, J. Madness in fine print. 1994. Time. November 7: 62-63; Horowitz, J. 1994. For the sake of science. LAT Magazine. September 11. cover story; and Hilts, P. 1994. Agency faults a UCLA study for suffering of mental patients. NYT. March 10. A-10; Beil, L. Psychiatric research raises legal red flag. 1996. Dallas Morning News. April 29: H-1, 10A; Doris, M. Experimental ethics. 1996. Boston Phoenix October 24: online at http://www.bostonphoenix.com/alt1/archive/new/96/10/24/ETHICS.html>; Epstein, K. C. and Sloat, B. "Drug trials: do people know the truth about experiments? Series. 1996. The Plain Dealer. December 15-18: front page; Weiss, R. 1998. Research volunteers unwittingly at risk. Washington Post. August 1: A-1; Kerr, K. 1998. Informed consent? Drug researchers criticized for involving psychiatric patients without fully explaining the risks. NY Newsday. September 8: C-6-7; Wadman, M. 1998. Research roulette: are the Maryland Psychiatric Research Center's schizophrenia studies harming patients?" City Paper. July 1; Birnbaum, G. 1999. Human guinea pigs: State eyes 'no consent' medical testing. NY Post, January 17: front page; Monmaney, T. U.S. suspends research at VA hospital in L.A. LAT, March 24: front page; Bonfield, T. 1999. UC research deaths go unreported: memos says rules were broken. Cincinnati Enquirer. April 28: front page; Hilts, P. J. and Stolberg, S. G. 1999. Ethics lapses at Duke halt dozens of human experiments. NYT, May 13. A-26; Bonnfield, T. 1999. UC defends human research: Patients told not to worry about reporting methods. The Cincinnati Enquirer, April 30: PAGE?; Wadman, M. 1999. NIH ethics office clamps down on Duke. Nature. May 20: 190; Kaplan, S. and Brownlee, S. 1999. Duke's hazards: did medical experiments put patients needlessly at risk?" U.S. News & World Report. May 24: 66-70; Manier, J. and Berens, M. 1999. UIC tolerated research ethics lapses, critics say. Chicago Tribune, September 3:1; Berens, M. J. and Manier, J. 1999. Safeguards get trampled in rush for research cash. Chicago Tribune, September 5: front page; Michaud, A. Lawmakers urging thorough investigation of UC research. 1999. Cincinnati Enquirer. April 27;.Nelson, D. and Weiss, R. 2000. Gene test deaths not reported promptly. Washington Post. January 31: A1; Kaufman, M. and Julien, A. 2000. Medical research: can we trust it?" Investigative series. The Hartford Courant, April 9-11: front page; Nelson, D. and Weiss, R. FDA halts experiments on genes at university: probe of teen's death uncovers deficiencies. Washington Post, January 22. A-1; Levine, S. and Weiss, R. 2001. Hopkins told to halt trials funded by U.S. Washington Post. July 19. Front page. Online at ; Kolata G. 2001. U.S. suspends human research at Johns Hopkins after a death. NYT, July 20. Online at ;
Wilson, D. and Heath, D. 2001. The blood-cancer experiment. Uninformed consent series. Seattle Times. July 20. front page. Online at: ; Caplan, A. 2001. Research ban at Hopkins a sign of ethical crisis. Opinion. Special to MSNBC, July 20. Online at http://www.time.com/time/health/article/0,8599,230358,00.html.
[21] The institutions were: Veterans Affairs Greater Los Angeles Health Care System; Rush-Presbyterian-St. Luke's Medical Center; Friends Research Institute, Inc., West Coast Division; King Drew Medical Center; Duke University Medical Center; Virginia Commonwealth University; University of Oklahoma, Tulsa Campus; University of Colorado Health Sciences Center; University of Pennsylvania and Johns Hopkins University; University of Illinois, Chicago (involved all Federally supported research); University of Alabama, Birmingham; and University of Texas Medical Branch at Galveston. See OHRP website, Letters of determination. Online at: http://ohrp.osophs.dhhs.gov/detrm_letrs/lindex.html
[22] U.S. Department of Health and Human Services, Office of the Inspector General, Report, 1998. Institutional Review Boards: A Time for Reform. June, OEI-01-97-00913. In a follow-up report, the Inspector General was highly critical of Federal agencies for making "minimal progress" at carrying out recommendations for improving the performance of university panels that review research involving human subjects. She had laudatory words only for the director of the Office for Protection from Research Risks, who had suspended research at seven institutions since 1998, suggesting that such enforcement actions should continue. See OIG Report, 2000. Protecting Human Research Subjects. April, OEI-01-00197.
[23] The institutions are those listed in note 21. See OHRP website, Letters of determination. Online at: http://ohrp.osophs.dhhs.gov/detrm_letrs/lindex.html.
[25] See, Brainard, J. 2000. NIH and FDA Should Do More to Protect Human Research Subjects. Chronicle of Higher Education. April 13, p. A-38. See, Hilts, P. 2000. Medical-Research Official Cites Ethics Woes. The New York Times. August 17.
[26] Alliance for Human Research Protection, letter to Tommy Thompson, Sec. of HHS, Feb. 7, 2002.
[27] DHHS letter, March 29, 2002, in AHRP file.
[28] Whitaker, R. and Kong, D. 1998. Doing harm: research on the mentally ill. Investigative series. Boston Globe. November 15-18: front page.
[29] Shelton, DL. "Ethical concerns focus microscope on research rules," AMA News, March 1, 1999.
[30] Elliott, C., "Pharma buys a conscience," The American Prospect, Sept 24-Oct 8, 2001, vol. 12.
[31] For example, Stanford University's Center for Biomedical Ethics reportedly received a $1 million gift from SmithKline Beecham. The University of Pennsylvania's Center for Bioethics receives funding from such corporate giants as Monsanto, Millennium Pharmaceuticals, Geron Corp., Pfizer, AstraZeneca, E.I. duPont, and Human Genome Sciences.
[32] See, Stecklow, S. and Johannes, L. 1997. Test Case: drug makers relied on clinical researchers who now await trial. Wall Street Journal, Aug. 15, front page.
[33] Lenzer, J. "Alteplase for stroke: money and optimistic claims buttress the 'brain attack' campaign," BMJ, March 23, 2002, 324: 723-729.
[34] Lenzer citing: Marsa, L. Prescription for profits: how the pharmaceutical industry bankrolled the unholy alliance between science and business. 1997, NY: Scribner, p. 160.
[35] Hoffman, JR, "Against: And just what is the emperor of stroke wearing?" BMJ, 2000, 173: 149-150. Dr. Hoffman stated: "Despite the enormous propaganda machine pushing the exciting fashion of thrombolytic therapy for acute ischemic stroke, there is good reason to question the efficacy of such therapy and overwhelming reason to question its effectiveness."
[36] Choudhry NK, Stelfox HT, Detsky AS. "Relationships between authors of clinical practice guidelines and the pharmaceutical industry," JAMA 2002. 287: 612-617.
[37] Bodenheimer, T. "Uneasy alliance – clinical investigators and the pharmaceutical industry," New Eng J Med, 2000, 342: 1539-1544; See also, Bodenheimer, T. "Conflict of interest in clinical drug trials: a risk factor for scientific misconduct," DHHS Conflict of Interest Conference, Aug 15. 2000 http://ohrp.osophs.dhhs.gov/coi/bodenheimer.htm.
[38] Thomas Laughren, M.D. "FDA's perspective on the use of placebo in psychotropic drug trials," University of Texas, Dept. of psychiatry and Committee for the Protection of Human Subjects. Placebo in mental health research: science, ethics and the law. April 7 and 8, 2000. Audio tape in author's possession.
[39] Brown University Press Release: February 28, 2002, at: http://www.brown.edu/Administration/News_Bureau/2001-02/01-091.html
See, Zimmerman, M, American Journal of Psychiatry, March 2002
[40] "Effect of Hypericum perforatum (St John's Wort) in Major Depressive Disorder: A Randomized Controlled Trial," JAMA Vol. 287 No. 14, April 10, 2002, at:
http://jama.ama-assn.org/issues/v287n14/rfull/joc11936.html.
[41] Kupfer, DJ and Frank, E. "Placebo in Clinical Trials for Depression: Complexity and Necessity," Editorial, JAMA, 2002; 287: 1807-1814.
[42] Dr. Kupfer also serves on the advisory boards of Eli Lilly and Forest. See Kupfer, DJ, Findling, RL, Geller, B, and Ghaemi, N, "Treatment of bipolar disorder during childhood, adolescent, and young adult years," Journal of Clinical Psychiatry, http://www.psychiatrist.com/audiograph/kupfer/index.htm.
[43] Smith, R. "Maintaining the integrity of the scientific record," British Medical Journal, Sept. 2001, vol 323: 588.
[44] Okie, S. "Missing data on Celebrex: full study altered picture of drug," the Washington Post, Aug. 5, 2001, Front page: http://www.washingtonpost.com/ac2/wp-dyn/A33378-2001Aug4?language=printer.
[45] Willman, D. "Propulsid: A Heartburn Drug, Now Linked to Children's Deaths," Los Angeles Times, December 20, 2000, Front page. http://www.latimes.com/news/nation/reports/fda/lat_propulsid001220.htm.
[46] "FDA Criticized For Delay Pulling Drug," CBS Evening News April 26, 2000 http://www.cbsnews.com/stories/2000/04/26/eveningnews/main189083.shtml.
[47] Spice, B. Science Editor. 2000. Was baby treated for ailment he didn't have? Pittsburgh Post-Gazette. July 9. Online at: http://www.post-gazette.com/healthscience/20010709gage0709p5.asp.
[48] Madsen Al, et al, 1998. Neuroleptics in progressive structural brain abnormalities in psychiatric illness. The Lancet. 352: (9130) 784; Harrison P, et al. 1999. Review: the neuropathological effects of antipsychotic drugs, Schizophr Res. 40:87‑99 and Gur, R.E, et al. 1998. A follow‑up magnetic resonance imaging study of schizophrenia. Archives of General Psychiatry. 55: 145‑152 and Gur, R.E., et al, 1998. Subcortical MRI volumes in neuroleptic‑naive and treated patients with schizophrenia. American Journal of Psychiatry. 155:1711‑1717, http://ajp.psychiatryonline.org/cgi/content/full/155/12/1711#F1J and Jauss M. 1998. Severe akathisia during olanzapine treatment of acute schizophrenia. Pharmacopsychiatry. 31:146‑8.
[49] Hyman, SE. and Nestler, EJ. 1996. Initiation and adaptation: a paradigm for understanding psychoactive drug action. Am J Psychiatry. 153:151-162. See also, Konradi, C., et al. 1996. Amphetamine and dopamine-induced immediate early gene expression in striatal neurons depends on postsynaptic NMDA receptors and calcium. Journal of Neuroscience. 16:4231-9.
[50] Hyman. Ibid., p. 151
[51] Vitiello, B. "Psychopharmacology for young children: clinical needs and research opportunities," Pediatrics, Oct 2001, Vol. 108, Issue 4, p. 983, 7p. Quote, p. 987 (estimated).
[52] Vitiello, quote, p. 983.
[53] Zyprexa was approved by the FDA in 1996 for adult schizophrenia.
[54] Whitaker, R. Mad in America, Perseus Books, 2001, p. 281.
[55] Dr. David Healy, "Testing psychotropic drugs in children," April 30, 2002, see: ahrp.org/children/healy0402.php.
[56] Zyprexa (olanzapine) was approved by the U.S. Food and Drug Administration on March 19, 2000, for "the short-term treatment of acute manic episodes associated with bipolar disorder."
[57] See Krishnamoorthy, J. and King, B. H. 1998. Open-label olanzapine treatment in five preadolescent children. Journal of Child and Adolescent Psychopharmacology. 8:107-13.
[58] Temple, RT and Himmel, MH. "Safety of Newly Approved Drugs: Implications for Prescribing," JAMA, Editorial, Vol. 287, No. 17, May 1, 2002, p. 2273.