
New book – The Recovery Revolution: The Battle Over Addiction Treatment in the United States


The book The Recovery Revolution: The Battle Over Addiction Treatment in the United States, written by Claire D. Clark, could be of interest to H-Madness readers. The abstract reads:

In the 1960s, as illegal drug use grew from a fringe issue to a pervasive public concern, a new industry arose to treat the addiction epidemic. Over the next five decades, the industry’s leaders promised to rehabilitate the casualties of the drug culture even as incarceration rates for drug-related offenses climbed. In this history of addiction treatment, Claire D. Clark traces the political shift from the radical communitarianism of the 1960s to the conservatism of the Reagan era, uncovering the forgotten origins of today’s recovery movement.

Based on extensive interviews with drug-rehabilitation professionals and archival research, The Recovery Revolution locates the history of treatment activists’ influence on the development of American drug policy. Synanon, a controversial drug-treatment program launched in California in 1958, emphasized a community-based approach to rehabilitation. Its associates helped develop the therapeutic community (TC) model, which encouraged peer confrontation as a path to recovery. As TC treatment pioneers made mutual aid profitable, the model attracted powerful supporters and spread rapidly throughout the country. The TC approach was supported as part of the Nixon administration’s “law-and-order” policies, favored in the Reagan administration’s antidrug campaigns, and remained relevant amid the turbulent drug policies of the late twentieth and early twenty-first centuries. While many contemporary critics characterize American drug policy as simply the expression of moralizing conservatism or a mask for racial oppression, Clark recounts the complicated legacy of the “ex-addict” activists who turned drug treatment into both a product and a political symbol that promoted the impossible dream of a drug-free America.

New issue – BMGN-Low Countries Historical Review. Blurring Boundaries: Towards a Medical History of the Twentieth Century


The first 2017 issue of BMGN-Low Countries Historical Review (Blurring Boundaries: Towards a Medical History of the Twentieth Century) is out now and includes two articles that may be of interest to h-madness readers.

Benoît Majerus, ‘Material Objects in Twentieth Century History of Psychiatry’. The abstract reads as follows:

Interest in the history of psychiatry in the social sciences manifested itself in the sixties and seventies at a moment when concepts such as marginality and deviance appeared as a thought-provoking path to rewrite the history of Western societies. This history of madness faces a turning point. Material culture, as this paper’s line of argument expounds, allows one to remain faithful to the critical heritage of the sixties and seventies while still opening up the field to alternative questions by integrating new actors and themes hitherto largely ignored. It allows nuanced narratives that take into account the structural imbalances of power while at the same time being attentive to the agencies of all the actors, as well as the failures of the institutional utopias.

Gemma Blok, ‘“We the Avant-Garde”: A History from Below of Dutch Heroin Use in the 1970s’. The abstract reads as follows:

In the 1970s the Netherlands (like many other western countries) was shocked by a sudden wave of heroin use. The heroin ‘epidemic’ is currently framed as a public health problem that has been solved in a commendably humane fashion. In the meantime, heroin users have gained a ‘loser image’. Using memoirs written by and interviews with former heroin users, this article argues that heroin use was initially linked to cultural rebellion, self-development and social criticism. We need to take this forgotten aspect of the history of the Dutch heroin ‘epidemic’ into account when we try to explain this historical phenomenon.

Hans Pols: “Treating Mental Illness Before It Strikes”

As we have mentioned before, H-Madness publishes an essay every month for the online magazine Psychiatric Times. We know, however, that many H-Madness readers do not subscribe to the magazine (although it is free). To be sure that you don’t miss any of the pieces there, we will be making a point of posting those essays and reviews here on H-Madness as well. What follows is a commentary posted last month at Psychiatric Times, written by co-editor of H-Madness, Hans Pols.

Treating Mental Illness Before it Strikes

by Hans Pols

Hans Pols is senior lecturer at the Unit for History and Philosophy of Science at the University of Sydney. He is interested in the history of psychiatry and the mental hygiene movement in North America and Europe, psychiatric war syndromes, and colonial psychiatry, in particular in the Dutch East Indies.

Psychotic episodes are devastating for the individuals who have them, their friends, and their families. Wouldn’t it be wonderful if individuals could receive treatment before the first psychotic episode strikes, so that it could be avoided altogether? After all, an ounce of prevention is better than a pound of cure. Unfortunately, in psychiatry, we are a long way from achieving primary prevention—there is no vaccine for psychosis, nor have clear genetic markers for severe and persistent forms of mental illness been identified. Throughout the twentieth century, psychiatrists have therefore focused their attention on the early detection of signs and symptoms of mental ill health, assuming that early treatment will stop conditions from becoming worse. However, the ideal of secondary prevention can only be realized if these early signs and symptoms, or a “pre-psychotic syndrome,” can be identified successfully. During the twentieth century, psychiatrists have defined many of these “pre-mental illness syndromes”; unfortunately, it has not always been demonstrated that they indeed constitute the early phases of severe and persistent forms of mental illness.

In June 2011, a number of Australian newspapers reported that a high-profile medical trial targeting psychosis in young adults would not go ahead. It was to be conducted by Prof. Patrick McGorry, who had been Australian of the Year (an honorary and mostly symbolic title bestowed by the Australian government on an unusually deserving citizen advocating worthy causes). In the proposed trial, youths as young as 15 would receive Seroquel when they were first diagnosed, not with psychosis but with attenuated psychosis syndrome (previously called psychosis risk syndrome). Treating young adults with this syndrome would nip the danger in the bud—their potential psychosis would be treated before it even arose. The trial was to have been sponsored by the drug’s manufacturer, AstraZeneca, which, like many pharmaceutical companies, was probably eager to test its medication on a younger age group to expand the market for its medications. What could be wrong with such a commendable initiative?

Attenuated psychosis syndrome is proposed for inclusion in DSM-V and has attracted an unusual amount of discussion (and dissent). In particular, its relation to psychosis is unclear. Emeritus professor Allen Frances, who chaired the task force that produced DSM-IV, is a fierce critic of the concept. According to him, there is hardly any evidence that attenuated psychosis syndrome, if left untreated, will ultimately develop into a full-blown psychosis (current estimates state that this will happen in merely 10 to 20% of cases). The number of “false positives” is therefore staggering. Treating a group of individuals of whom up to 90% would never become psychotic appears to be a waste of resources and a rather risky proposition.

McGorry’s proposed trial was widely criticized by psychiatrists worldwide, who raised a number of significant ethical problems. First, there is the high number of false positives: individuals who would receive medication for a condition they would never develop, even if left untreated. The trial would not target incipient psychosis but would probably address more or less unrelated conditions. This leads to the second ethical problem: a great number of young people would be put on a medication they do not need. This would not matter so much if only aspirin or vitamin tablets were being tested. Unfortunately, Seroquel has many highly undesirable side effects, including extreme weight gain and diabetes. One should only prescribe it when it is absolutely necessary.

Last April, AstraZeneca paid $525 million to settle a lawsuit by the U.S. government over allegations that it paid kickbacks to doctors while promoting the drug for unapproved uses by children, the elderly, veterans, and prisoners (New York Times, July 28, 2011). It has also settled, for $647 million, product liability cases for misleading patients about the risks of diabetes and weight gain associated with the use of the drug. Total legal expenses associated with Seroquel now stand at $1.9 billion, which constitutes less than five months of Seroquel sales. Not a great medication to prescribe to individuals who do not need it.

McGorry’s proposed research has attracted (unfavorable) media attention (in Australia); I highlight it here not because it is exceptional or unusual in any way, but instead because it illustrates ways of thinking that have been part and parcel of twentieth century psychiatry. The most important of these is the ideal of secondary prevention in psychiatry: it is imperative to treat psychiatric conditions when they first appear and when they are not as serious as they could become if left untreated. This prevents them from becoming worse and less responsive to treatment. This strategy is of course commendable when there is a proven link between these less serious conditions and more serious ones. In most cases, it has been assumed that such a link exists; it has hardly ever been demonstrated.

The emphasis on prevention is not unique to psychiatry but characterizes developments in several (if not all) medical specialties. In days long since gone, one would see a dentist when one’s toothache became unbearable—today, dentists fill cavities and polish our teeth so that we will never end up in this situation. They also whiten and straighten our teeth although this prevents neither toothaches nor tooth decay. Today, the demands we make of physicians (and dentists) far exceed those of average patients a hundred years ago. Today, physicians do a lot more than treating serious illness—and we expect them to do that.

Most historians of psychiatry have discerned two themes in the history of twentieth century psychiatry. First, there has been a broadening of the definition of what constitutes mental ill-health. A wide range of conditions in between mental health and severe and persistent forms of mental illness have been identified and investigated. The formerly almost absolute distinction between mental health and mental illness has been replaced by a wide spectrum of conditions, which has led to the blurring of the distinction between the normal and the abnormal. Second, conditions on this spectrum have increasingly become the target of psychiatric intervention; psychiatrists now treat a variety of conditions less serious than severe and persistent forms of mental illness but definitely in need of treatment. During the twentieth century, prevention has been the most important argument holding both themes together: treating less serious psychiatric conditions prevents them from becoming worse—because it has been assumed these conditions will inevitably become more serious over time. It was well into the twentieth century before any effective medical treatments for severe and persistent forms of mental illness were developed. Mental hospitals were severely overcrowded while little could be done for their inmates. Therapeutic nihilism reigned. Any type of intervention that promised to prevent mental illness from developing or becoming worse was therefore worth considering.

The blurring of the distinction between normal and abnormal is generally associated with Sigmund Freud: according to psychoanalysis, nobody is entirely normal, although some individuals are better at keeping their unconscious desires in check than others, thereby maintaining an appearance of mental health and normality. Despite differences in appearance, we are all to a certain extent mad. Views like these open up unexpected vistas for psychiatric attention: behind the everyday veils of normality, happiness, and adjustment hide psychopathology, lust, and perversion. Nevertheless, the blurring of the distinction between the normal and the abnormal is not unique to psychoanalysis. The historian of psychiatry Elizabeth Lunbeck has analyzed (in The Psychiatric Persuasion, 1994) how, in the 1910s, American psychiatrists proposed psychopathy as a category to designate forms of psychopathology that had previously gone unrecognized because those affected had been able to pass as normal. No longer would mental illness, as insanity, be limited to insane asylums, where it could be contained successfully. On the contrary, the widespread presence of psychopaths everywhere, hiding under the veil of normality, threatened the social fabric of American society. These views made psychiatric intervention even more compelling: not only would it remove sick individuals from public life, it could also protect the social order.

In my own work on the history of the mental hygiene movement, similar themes appear. In the 1920s, mental hygienists launched a major project on the treatment of juvenile delinquency to prevent children from developing life-long criminal careers. The concept of adjustment as an essential marker of mental health, central to the philosophy of mental hygiene, brought a great range of human behavior under the purview of psychiatry. Instead of treating maladjustment in adults (for example, adults with mental illness), mental hygienists argued that treating maladjustment in children (for example, children with enuresis or temper tantrums) would prevent serious forms of mental illness from arising later in life. By labeling all forms of undesirable behavior as maladjustment, it became self-evident to expect relatively innocent forms of maladjustment to become serious forms of maladjustment later on. Rather than punishing delinquency, the therapeutic treatment of children with “pre-delinquent syndrome” could be expected to bear fruit. Unfortunately, the central assumptions of this approach were never put to the test, and would most certainly not have held up when investigated properly. Led by their convictions, psychiatrists and mental hygienists were not bothered by this. They focused on lesser complaints, while neglecting the plight of the mentally ill in increasingly overcrowded mental hospitals (leaving them to somatic psychiatrists who experimented with insulin therapy, metrazol shock therapy, ECT, and lobotomy).

The mistaken impression could arise that the two themes in the history of psychiatry identified thus far (blurring the distinction between normal and abnormal, and targeting less serious states for psychiatric intervention) were characteristic only of psychoanalysis or other psychiatric approaches focusing on mental and behavioral factors. It would be tempting to dismiss them on the grounds that, with psychiatry becoming increasingly biological and scientific, such trends have been reversed today. Nothing could be farther from the truth, however. During the last twenty years or so, these trends have developed in an unprecedented way in psychopharmacological psychiatry. In the 1950s, only individuals with severe and persistent forms of mental illness received medication such as Thorazine. Today, fidgety and distracted kids as well as shy adults are portrayed as individuals who could benefit from psychopharmacology. Increasingly, a wider range of psychiatric medications is prescribed to young children, with the idea that early intervention will prevent problems from getting worse. It is this mind-set, now more than a century old, that made McGorry’s research project appear innovative and cutting edge.

McGorry has introduced a slight modification to his study, which will now go ahead. Instead of Seroquel, he will now test the efficacy of fish oil.

New Issue of History of the Human Sciences

The most recent issue of History of the Human Sciences, with Scott Vrecko as guest editor, is dedicated to Neuroscience, Power and Culture. Two articles address more specifically the use of drugs within psychiatry.

The persistence of the subjective in neuropsychopharmacology: observations of contemporary hallucinogen research by Nicolas Langlitz (Department of Anthropology, New School for Social Research). The abstract reads:

The elimination of subjectivity through brain research and the replacement of so-called ‘folk psychology’ by a neuroscientifically enlightened worldview and self-conception has been both hoped for and feared. But this cultural revolution is still pending. Based on nine months of fieldwork on the revival of hallucinogen research since the ‘Decade of the Brain,’ this paper examines how subjective experience appears as epistemic object and practical problem in a psychopharmacological laboratory. In the quest for neural correlates of (drug-induced altered states of) consciousness, introspective accounts of test subjects play a crucial role in neuroimaging studies. Firsthand knowledge of the drugs’ flamboyant effects provides researchers with a personal knowledge not communicated in scientific publications, but key to the conduct of their experiments. In many cases, the ‘psychedelic experience’ draws scientists into the field and continues to inspire their self-image and way of life. By exploring these domains the paper points to a persistence of the subjective in contemporary neuropsychopharmacology.

Profitable failure: antidepressant drugs and the triumph of flawed experiments by Linsey McGoey (Saïd Business School, University of Oxford). The abstract reads:

Drawing on an analysis of Irving Kirsch and colleagues’ controversial 2008 article in PLoS [Public Library of Science] Medicine on the efficacy of SSRI antidepressant drugs such as Prozac, I examine flaws within the methodologies of randomized controlled trials (RCTs) that have made it difficult for regulators, clinicians and patients to determine the therapeutic value of this class of drug. I then argue, drawing analogies to work by Pierre Bourdieu and Michael Power, that it is the very limitations of RCTs — their inadequacies in producing reliable evidence of clinical effects — that help to strengthen assumptions of their superiority as methodological tools. Finally, I suggest that the case of RCTs helps to explore the question of why failure is often useful in consolidating the authority of those who have presided over that failure, and why systems widely recognized to be ineffective tend to assume greater authority at the very moment when people speak of their malfunction.
