Psychiatric diagnosis, and psychiatry more broadly, are in crisis. Evidence of this is to be found in the National Institute of Mental Health's (NIMH) decision not to use the American Psychiatric Association's DSM-5, published in May 2013. It is also marked by the existence of DxSummit, as well as by the critical responses to DSM-5 of organisations as diverse as the Hearing Voices Network[1] and Mental Health Europe[2]. This is no guild dispute between clinical psychologists and psychiatrists. Four editorials or special articles published in the British Journal of Psychiatry over the last five years (Craddock et al, 2008; Bullmore et al, 2009; Oyebode & Humphreys, 2011; Bracken et al, 2012) acknowledge that psychiatry is in crisis. These papers offer different analyses of the crisis, and different solutions, but the fact remains that scientific research has failed to reveal the causes of madness, and confidence in the quality of the scientific evidence used to justify the treatment of people with psychiatric diagnoses has been seriously undermined.

There is in some quarters of psychiatry a strong belief that the failure of science to 'crack' the problem of psychiatric diagnosis will turn out to be a temporary setback on the path to enlightenment. Science will, at some unspecified future point, establish psychiatry as a 'medicine of the brain'. The greatest shortcoming of psychiatric diagnosis, its lack of validity, will vanish as molecular genetics and neuroscience refashion the way we think about psychosis, shaping it into as yet unthought-of forms. At least these are the claims made by Craddock et al (2008) and Bullmore et al (2009). Others have abandoned the sinking ship of categorical diagnosis; the smart money has skipped past gothic categories like schizophrenia in the rush for the neuroscience of child abuse (Hart & Rubia, 2012). Charles Nemeroff, a sullied denizen of psychiatry's demimonde, was quicker than most out of the traps, with a large NIH grant to investigate the neurobiology of PTSD[3].

Despite this, the scientific evidence for the importance of contexts in understanding the experiences of people who suffer from psychosis is overwhelming, yet persistently disregarded by funding bodies. These contexts include personal histories of trauma and adversity, especially in childhood (Read et al, 2001; Read et al, 2005; Read et al, 2009), as well as other forms of oppression and abuse, such as racism, and wider socio-economic contexts of inequality (Karlsen & Nazroo, 2002; Janssen et al, 2003; Karlsen et al, 2005). The nature of these contexts raises moral and ethical questions about our work. They flag up important issues about values in psychiatry. What do we really believe to be important about the way we try to help people who experience psychosis?

Given the claims that have been made for neuroscience as guarantor of the future of psychiatry, it is worth remembering that the scientific search for the biological basis of madness is one hundred and fifty years old. How can we be sure that we won't end up in the same situation a hundred years down the line? Are psychiatrists justified in pinning their hopes for the future of psychiatry on neuroscience? Can we justify the huge costs entailed in the research necessary to achieve this? What is the status of the technologies of neuroscience that feature in this task? Are there limits to this technology, and if so, what are they? The experiences of psychosis, voices, unusual beliefs, and intense distress are not properties of brains. They are the contents of consciousness of suffering individuals, and are deeply enmeshed in personal stories of lives afflicted by trauma and adversity. In view of the high expectations of neuroscience, we are entitled to ask what assumptions it makes in its attempts to explain consciousness and experience, and what the implications of these assumptions are for a neuroscientific psychiatry. If there are indeed limits to neuroscience, then what are the consequences for the future direction of psychiatry? This article argues that three problems, statistical, methodological and conceptual, bedevil neuroscience when it comes to investigating psychosis. In turn these problems originate in more fundamental philosophical difficulties that neuroscience has with the relationship between brain and consciousness.

‘Seeing’ consciousness?

Developments in technology have revolutionised the practice of medicine over the last fifty years. In particular, the diagnosis and assessment of neurological disorders has benefited greatly from new techniques that make it possible to see the brain in much greater detail than conventional skull X-rays ever allowed. Magnetic resonance imaging was introduced in the early 1980s, and less than a decade later a variant made it possible to study metabolic activity in vivo, by measuring the blood flow through tissues in real time. This technology, functional magnetic resonance imaging (fMRI), has since been widely used to study brain activity, opening up a new field of cognitive neuroscience research. Such has been the explosion in this field that Logothetis (2008) identified over 19,000 papers with 'fMRI' as a keyword published since 1991. Nearly half of these are attempts to localise brain activity in relation to a wide range of cognitive tasks. And beyond serious scientific study there is a frenzy of media interest in the field. Astonishing claims have been made about the ability of neuroscience to explain our most human experiences, as fMRI studies have infiltrated just about every human space possible, from investigations purporting to demonstrate the neural basis of romantic love (Bartels & Zeki, 2000), to the neurobiology of aesthetics in the appreciation of music (Salimpoor et al, 2013), and the neurophilosophy of free will and criminal responsibility in criminology (Farahany, 2012). It is little surprise that some neuroscientists, like Raymond Tallis (who is also a clinician), identify such cultural developments as 'neuromania'.

The reason for the broad cultural appeal of this work is to be found in the popular belief that it enables us literally to 'see' consciousness, or at least the brain activity that appears to give rise to it. fMRI presents us with vivid and startling images of the brain, with multi-coloured lights flashing on and off to indicate activity in different areas as the brain processes the information necessary for consciousness. My colleague Dirk Corstens describes this as the 'Pinball' view of the brain, and for many non-specialists the neuroscientists who create and manipulate these images must indeed seem like Pinball Wizards. The assumption is that because we can now observe brain activity directly in real time, this must tell us something of fundamental importance about the neural basis of consciousness. It is very easy to be seduced into believing this. For most of us the origins of these images are shrouded in mystery, so it is tempting to assume they must represent a truth about the nature of consciousness. But is this really the case? There are many steps, traps and pitfalls on the path from the image of the pinball brain, through metabolic activity in brain tissue, to conscious experience. In order to appreciate these obstacles it may be helpful to be clear about the basic principles upon which fMRI is based.

When an area of the brain is active it has an increased need for oxygen and nutrients such as glucose, which it is incapable of storing. Consequently the blood supply to the area increases. The functional activity in a brain area can therefore be detected by measuring its perfusion, the amount of blood passing through it. A number of techniques have been used to detect and measure changes in regional cerebral blood flow (rCBF), such as positron emission tomography (PET) and, more recently, fMRI. In fMRI the subject's brain is exposed to a very powerful, constant magnetic field, which aligns the nuclei of the hydrogen atoms in the brain in the same direction. A brief radiofrequency pulse is then applied, which nudges the nuclei into a higher energy state. When the pulse ends the nuclei slowly return to their previous state, and in doing so release a small amount of radio energy. This can be detected and measured to provide an indication of the positions of the nuclei. Oxygen-rich blood has different magnetic properties from deoxygenated blood, and this difference is used to indicate which areas of the brain have been metabolically active, and thus, it is assumed, responsible for the contents of consciousness under investigation. These data are then manipulated mathematically to generate a map of the brain showing the level of activity in different areas.

Over the last twenty years there has been a rapid growth in studies that use fMRI to investigate the neural basis of voice hearing and other experiences of psychosis. In general these studies compare the level of activity in the brain 'at rest', that is to say when the person is sitting quietly and not hearing voices, with the activity when the person indicates that they are hearing voices. Subjects are asked to press a button as soon as they hear voices, so that the periods of voice hearing can be marked in the scan. The activity in the resting state is then 'subtracted' from the activity measured when the person is hearing voices. The data are handled by complex mathematical and statistical procedures, which convert differences in brain activity between the two situations into a colour-coded map: the mysterious images that feature prominently in scientific papers, news reports and documentaries. The assumption is that any difference in brain activity between the two states reflects the brain processes that cause the voices heard by the person.
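
In outline, the subtraction step is simple arithmetic, even if the statistics layered on top of it are not. The sketch below is illustrative only: the tiny voxel grid, the signal values and the threshold are invented for the example and are not drawn from any study.

```python
import numpy as np

# Illustrative only: a tiny 4x4x4 'brain' of voxels with made-up signal values.
rng = np.random.default_rng(0)
shape = (4, 4, 4)

# Mean signal per voxel in the assumed 'resting' baseline condition...
rest = 100 + rng.normal(0, 0.5, shape)

# ...and while the subject indicates (by button press) that they are hearing voices.
voices = rest + rng.normal(0, 0.5, shape)
voices[1, 2, 2] += 3.0  # pretend a single voxel responds during voice hearing

# The subtraction step: a difference map between the two conditions.
difference = voices - rest

# Voxels whose difference exceeds an arbitrary threshold are declared 'active'
# and painted in colour on the final image; everything else is left blank.
threshold = 2.0
active = np.argwhere(difference > threshold)
print("Voxels labelled 'active':", active.tolist())
```

Everything that follows concerns what is hidden inside those few lines: what counts as 'rest', what the signal actually measures, and what licenses the leap from a thresholded difference map to a claim about the causes of conscious experience.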

A recent review of brain imaging studies of people with a diagnosis of schizophrenia who were hearing voices found '… insufficient neuroimaging evidence to fully understand the neurobiological substrate of [auditory hallucinations]' (Allen et al, 2012). The authors briefly acknowledge that the interpretation of these studies is complicated by a number of factors. Most involve only small numbers of subjects, making it difficult to draw firm conclusions about the wider relevance of the results. Most fail to take into account the effects of medication. But this is to skate over the surface of the problems, and the ice is thin. These studies have three main weaknesses: statistical, methodological (or empirical), and conceptual. The statistical problems do not throw into question the basic assumptions of neuroscience, but they do cast serious doubt on the validity of the findings (see, for example, Ioannidis, 2011; Button et al, 2013). Although a study may report a significant relationship between brain activity and an aspect of consciousness, we cannot assume that the relationship is true and robust. It might just be a chance finding. The empirical and conceptual problems, however, pose searching questions about the assumptions that neuroscience is forced to make about consciousness, and for this reason I will focus on them here.
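
The statistical point deserves one concrete illustration before we set it aside. The toy simulation below uses invented numbers, not a re-analysis of any published data, but it captures the argument of Button et al (2013): when the true effect is modest and the groups are small, most studies miss it altogether, and the minority that do reach statistical significance exaggerate it.

```python
import numpy as np
from scipy import stats

# Toy simulation of the 'power failure' argument (invented numbers, not real data).
rng = np.random.default_rng(1)
true_effect = 0.3   # a modest true group difference, in standard-deviation units
n_per_group = 15    # the sort of sample size common in fMRI studies of voice hearing
n_studies = 5000

n_significant, reported_effects = 0, []
for _ in range(n_studies):
    controls = rng.normal(0.0, 1.0, n_per_group)
    patients = rng.normal(true_effect, 1.0, n_per_group)
    t, p = stats.ttest_ind(patients, controls)
    if p < 0.05:
        n_significant += 1
        reported_effects.append(patients.mean() - controls.mean())

print(f"Share of studies reaching p < .05: {n_significant / n_studies:.2f}")
print(f"Average effect in the 'significant' studies: {np.mean(reported_effects):.2f} "
      f"(the true effect is {true_effect})")
```

Run enough small studies like this and the published findings are dominated by inflated, unrepresentative results, which is the kind of concern Ioannidis (2011) raises about the brain volume literature.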

Empirical problems

At issue here is the precise relationship between brain activity and contents of consciousness such as voices. How do we know that the patterns of cortical activity seen on fMRI scans when people hear voices actually reflect the underlying brain processes that cause the experience? This matters because the claim is that neuroscience will deliver causal accounts of experiences like voice hearing grounded in brain activity. The first and most obvious problem is how we can be certain that the brain activity seen on an fMRI scan when someone hears voices is not related to some other brain activity associated with the experimental situation. In these experiments subjects wait, lying down with their heads in the depths of the scanner, listening vigilantly, ready to respond by pressing the button as soon as they hear a voice. This hardly represents a state in which the brain can be assumed to be at rest. van Lutterveld et al (2013) tried to investigate these confounding factors through two meta-analyses, one of ten neuroimaging studies of voice hearers and the other of eleven studies involving auditory stimulus detection tasks. In the first set of studies voice hearers indicated exactly when they were hearing voices by pressing a button. In the second set, subjects pressed a button when they heard a non-speech target sound. Using a variety of complex statistical procedures to compare patterns of brain activity across the two sets of studies, the authors reported activity in over a dozen brain areas that they claimed was specific to the experience of hearing voices rather than to auditory detection.

The empirical problem here can be posed as a question: how are we to understand the relationship between the observed brain activity and the contents of consciousness (voices) that these studies assume to be caused by these events? The problem concerns the relationship in time and space between brain activity and conscious experience. For example, Logothetis (2008) acknowledges that there are constraints in the spatial and temporal resolution of fMRI technology that set limits to the conclusions that can be drawn from these studies. The latest technology may well have greatly improved resolution, with voxel[4] sizes some two or three orders of magnitude smaller than earlier machines, but this doesn't make the task of interpreting the results of these studies any easier. A typical fMRI voxel contains over 5 million neurons, 2.2–5.5 × 10¹⁰ synapses, 22 kilometres of dendrites and 220 kilometres of axons. The technology creates images of the brain that consist of thousands of voxels.
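
A rough back-of-the-envelope calculation makes the point vivid. The sketch below is mine, not Logothetis's: the voxel volume of roughly 55 cubic millimetres and the neuron and synapse densities are assumptions chosen to be consistent with the figures quoted above, and serve only to show the orders of magnitude involved.

```python
# Back-of-the-envelope estimate of what a single fMRI voxel contains.
# All three numbers below are assumptions for illustration, chosen to be
# consistent with the figures Logothetis (2008) quotes for a typical voxel.

voxel_volume_mm3 = 55.0          # a typical voxel of roughly 55 microlitres (55 mm^3)
neurons_per_mm3 = 1.0e5          # assumed cortical neuron density, ~10^5 per mm^3
synapses_per_neuron = 5.0e3      # assumed average synapse count per neuron

neurons = voxel_volume_mm3 * neurons_per_mm3
synapses = neurons * synapses_per_neuron

print(f"Neurons in one voxel:  about {neurons:.1e}")   # ~5.5 million
print(f"Synapses in one voxel: about {synapses:.1e}")  # tens of billions
```

One coloured blob on a scan therefore summarises, in a single number, the behaviour of a population of cells larger than that of a sizeable city.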

The philosopher and cognitive scientist Alva Noë (2009) points out that we have no way of knowing whether, beyond current levels of discrimination, there are groups of neurons that are active or inactive in a given task or situation. The resolution of the technology is simply too blunt for us to assume that there is a one-to-one equivalence between experience and brain activity in a specific area. In any case, let us assume that future technological advances make it possible to measure the activity of every single neuron and axon across the entire brain simultaneously. Would this enable us to explain the relationship between brain activity and the contents of consciousness? The answer must be no; we do not understand a newspaper article by trying to read it with an electron microscope.

fMRI studies sidestep the question of where in the brain the supposed equivalence between brain activity and consciousness is located, or, to borrow from the Beatles, where does it all come together? Setting aside the problem of spatial resolution, there is also that of temporal resolution. Neuronal activity occurs on a timescale of a few milliseconds, but it can take much longer, of the order of hundreds of milliseconds, to detect and process the signals involved in the perception of images or sounds. The interpretation of these images is further clouded by the technique of normalisation, widely used in such studies to generate a statistical average of brain activity. This irons out differences between individual subjects, making it possible to pool the data mathematically so that the average activity across subjects can be projected onto an idealised brain template. The Fauvist images that fMRI presents us with are not those of a real person, but an idealised average. They don't even have the same relationship to the activity in a real person's brain as an identikit picture of a crime suspect has to a real person's face.
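
The effect of this kind of group averaging is easy to demonstrate with invented data. In the sketch below (illustrative only; the number of subjects, the voxel grid and the activation values are all made up) each subject activates a partly idiosyncratic set of voxels, and the composite map that results belongs to nobody in particular.

```python
import numpy as np

# Illustrative only: invented per-subject 'activation maps' on a shared template grid.
rng = np.random.default_rng(2)
n_subjects, n_voxels = 12, 100

# Each subject activates a different, partly idiosyncratic set of voxels.
maps = np.zeros((n_subjects, n_voxels))
for s in range(n_subjects):
    active_voxels = rng.choice(n_voxels, size=10, replace=False)
    maps[s, active_voxels] = rng.uniform(1.0, 3.0, size=10)

# 'Normalisation' reduced to its essence: project everyone onto the same template
# and average. The composite is smoother and weaker than any individual map, and
# need not resemble any single subject's pattern of activity.
group_average = maps.mean(axis=0)

print(f"Peak activation in one individual's map:  {maps[0].max():.2f}")
print(f"Peak activation in the group-average map: {group_average.max():.2f}")
```

The identikit analogy is the right one: the averaged image is a statistical construction, not a picture of anyone's brain at work.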

Finally, what exactly do the colours indicate? We are told they are direct representations of brain activity, but are they? Noë points out that they are actually based in physical measures of light (in the case of PET) or radio waves (fMRI), which are in turn assumed to represent metabolic activity in the area concerned. In reality, the final images of brain activity depicted by fMRI and PET studies are at least three removes from brain activity. First, they are measures of cerebral blood flow; second, blood flow is assumed to correlate with metabolic activity; third, metabolic activity is assumed to correlate with mental activity. Stufflebeam and Bechtel (1997) make broadly similar points about PET. Each stage in the generation of these images involves a transformation of the data that can give rise to artefacts. All this makes it extremely difficult to interpret the significance of the empirical findings of these studies in terms of the conscious experiences they are said to cause.

Conceptual problems

There are two issues here. The first is an assumption at the heart of neuroscience, without which its attempts to interpret the significance of functional brain imaging studies are futile: the assumption that mind has a modular structure. The second is an assumption about the 'resting state' of the brain, which is central to the design of the fMRI subtraction studies widely used in research on voices. The theory of modularity proposes that the mind consists of a set of discrete functional modules, each processing the information that gives rise to consciousness. This theory (and it is a theory) is associated with the work of the philosopher Jerry Fodor (1983), and it has played a vital role in cognitive theories and artificial intelligence that attempt to explain consciousness through mathematical operations on sense data, using the computer as an analogy for the brain. It is clear that Logothetis (2008) makes this assumption, but if, as Van Orden and Paap (1997) argue, it is wrong to assume that mind is organised along modular lines, then it becomes impossible to know how to interpret the results of these studies.

Noë (2009) scrutinises the problem of modularity through studies of brain activity in word-rhyming tasks, which compare brain activity in subjects under two experimental conditions. In one, subjects attend to a list of words and judge whether or not they rhyme; in the other, they simply listen to a list of words without making any such judgement. Superficially this would appear to be a robust method for investigating the neural basis of making rhyming judgements. However, Noë points out that it assumes in the first instance that mind is organised along modular lines, and that this modularity is instantiated by brain structures and activity corresponding to the modules necessary to undertake the task. But if mind is not organised according to modular principles, then it becomes impossible to interpret the significance of the results of studies of this nature. We know that neuronal circuitry in the cortex is extremely complex and contains many feedback loops, an observation that is very difficult to reconcile with the idea that mind operates in a modular fashion. There are many afferent pathways carrying information into different areas of the brain, but there are even more efferent pathways carrying information back out. He writes:

The assumption that there is no feedback in the neural circuitry is the flip side of a different assumption that we can factor the cognitive act itself into distinct, modular acts of perceiving the words (on the one hand) and judgements about whether they rhyme on the other.

(Noë, 2009: 22)

The assumption of modularity is a major claim about the nature of mind and consciousness that is impossible to verify empirically, and it has a powerful influence in shaping the way neuroscientists interpret fMRI studies. It is an assumption too far. Its influence is clearly to be seen in the fMRI studies of voices reviewed by Allen et al (2012) and van Lutterveld et al (2013), which interpret the brain activity seen when people hear voices in terms of a mind organised along modular lines. But the theory of modularity is, as Noë points out, just that: an unsubstantiated theory, and we cannot assume that the brain activity seen in fMRI scans corresponds to the action of brain or mind modules.

This brings us back to the problem of how we decide exactly which areas of activity correspond to the particular content of consciousness under investigation. Earlier, we saw that the most commonly used design is the time-integrated subtraction method. This compares the average activity when subjects are hearing voices with a control state in the same subjects when they are not hearing voices, and the brain is assumed to be 'at rest'. The difference between the two is taken to correspond to the brain activity responsible for the experience. But is this justifiable? The subtraction methodology widely used in studies of people who hear voices raises a basic question: how do we know when the brain is at rest? More than that, how do we know what the activity of a resting brain is really like? And how can we be sure that the level and patterns of activity of my brain at rest are more or less identical with yours? If we cannot be certain about the equivalence of resting states from one subject to another, then the interpretation of the results of subtraction studies becomes all but impossible. A recent paper by Felicity Callard and colleagues (2012) raises serious questions about the assumptions many neuroscientists make about the brain's 'resting' state, or the so-called default-mode network (DMN). They point out that the 'resting' state, or DMN, came to be defined historically in terms of the neural regions that were not engaged by the external, task-based experiments of the 1990s. This makes it very difficult to interpret the nature of the activity observed in the brain when it is supposed to be 'at rest'.
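
The dependence of the whole design on the chosen baseline can be made concrete with another invented example. In the sketch below (illustrative only; the voxel indices and signal values are made up), the same 'voice hearing' data are compared against two different assumed resting states, one genuinely quiet and one in which the subject is vigilantly waiting with a finger on the button, and the resulting maps of 'active' voxels differ.

```python
import numpy as np

# Illustrative only: invented signal values, in arbitrary units, for 50 voxels.
n_voxels = 50
rng = np.random.default_rng(3)

voice_related = np.zeros(n_voxels)
voice_related[[5, 6, 7]] = 3.0        # voxels engaged while hearing voices (invented)
vigilance_related = np.zeros(n_voxels)
vigilance_related[[20, 21]] = 2.5     # voxels engaged by vigilant waiting (invented)

# During scanning the subject is both hearing voices and vigilantly waiting.
voices = 100 + voice_related + vigilance_related + rng.normal(0, 0.3, n_voxels)

# Two candidate baselines: genuinely quiet rest, or vigilant waiting for the button.
baseline_quiet = 100 + rng.normal(0, 0.3, n_voxels)
baseline_vigilant = 100 + vigilance_related + rng.normal(0, 0.3, n_voxels)

threshold = 1.5
active_vs_quiet = np.flatnonzero(voices - baseline_quiet > threshold)
active_vs_vigilant = np.flatnonzero(voices - baseline_vigilant > threshold)

print("'Active' voxels against the quiet baseline:   ", active_vs_quiet.tolist())
print("'Active' voxels against the vigilant baseline:", active_vs_vigilant.tolist())
```

Against the quiet baseline the vigilance-related voxels show up as if they were part of the neural substrate of hearing voices; against the vigilant baseline they disappear. Nothing in the scan itself tells us which comparison is the right one; that choice rests entirely on assumptions about what a resting brain is doing.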

The gulf between neuroscience and meaning

These empirical and conceptual problems belong to a deeper set of philosophical problems that consciousness, experience, and being-in-the-world pose for neuroscience. I don't have space to explore these here, but you can read more about them in my forthcoming book Psychiatry in Context (Thomas, 2014). In a nutshell, there are five elements of consciousness that neuroscience is incapable of dealing with. These are, first, the importance of the subject of experience, and of the conscious observer; second, the importance of the world in relation to consciousness; third, the richly varied and individual nature of sensation in consciousness; fourth, the 'aboutness' of consciousness, and especially the significance of the 'aboutness' of memory; and finally, the importance of time in consciousness, and the complexity of our relationship to time. Of all these, time is the most vital when it comes to understanding ourselves. Our complex relationship to time transforms a critique of the limitations of neuroscience in explaining consciousness into an understanding of why neuroscience is both unable and unsuited to grasp the meaningfulness of experiences like hearing voices. We may be able to 'see' voices in the brain, but these are wordless, disembodied voices, disconnected from the life in which they belong.

A future neuroscientific psychiatry, a 'medicine of the brain' with or without diagnosis, would strip all the richness, pain and complexity out of the experience of psychosis, because it has nothing to say about these aspects of experience. Psychiatrists would become so many Tommy Walkers, Pinball Wizards, the deaf, dumb and blind kids who '…ain't got no distractions…', at least not from the stories and experiences of their patients. Voices may relate at one level to brain events, but we experience them irreducibly as parts of our lives. They are bound to the events and circumstances that we experience in the world, and through the complexity of memory and our ever-shifting experience of time, they become alive in the present. They are also bound in our lives to contexts and experiences that are private and frequently unspeakable, the most intimate experiences of pain and suffering, of trauma and abuse. This demands that psychiatrists, psychologists and all others whose job it is to help people who experience profound states of psychosis, alienation and distress engage with concern for the person in such a situation. In a future blog I will show how this is possible by setting out a philosophical justification for the use of narrative in psychiatric practice.

References

Allen, P., Modinos, G., Hubl, D., Shields, G., Cachia, A., Jardri, R., Thomas, P., Woodward, T., Shotbolt, P., Plaze, M. & Hoffman, R. (2012) Neuroimaging Auditory Hallucinations in Schizophrenia: From Neuroanatomy to Neurochemistry and Beyond. Schizophrenia Bulletin, 38, 695–703. doi:10.1093/schbul/sbs066

Bartels, A. & Zeki, S. (2000) The neural basis of romantic love. NeuroReport, 11, 3829–3834.

Bracken, P., Thomas, P., Timimi, S., Asen, E., Behr, G. et al (2012) Psychiatry beyond the current paradigm. British Journal of Psychiatry, 201, 430–434. doi:10.1192/bjp.bp.112.109447

Bullmore, E., Fletcher, P. & Jones, P. (2009) Why psychiatry can’t afford to be neurophobic. British Journal of Psychiatry 194, 293–295. doi:10.1192/bjp.bp.108.058479

Button, K., Ioannidis, J., Mokrysz, C., Nosek, B., Flint, J., Robinson, E. & Munafò, M. (2013) Power failure: why small sample size undermines the reliability of neuroscience. Nature Reviews Neuroscience, published online 10 April 2013. doi:10.1038/nrn3475

Callard, F., Smallwood, J.  and Margulies, D. (2012) Default positions: how neuroscience’s historical legacy has hampered investigation of the resting mind. Frontiers in Psychology 3, 1-6 doi: 10.3389/fpsyg.2012.00321

Craddock, N., Antebi, D., Attenburrow, M-J., Bailey, A., Carson, A. et al (2008) Wake-up call for British psychiatry. British Journal of Psychiatry 193, 6–9. doi: 10.1192/bjp.bp.108.053561

David, S., Ware, J., Chu, I., Loftus, P., Fusar-Poli, P., Radua, J., Munafò, M. & Ioannidis, J. (2013) Potential Reporting Bias in fMRI Studies of the Brain. PLOS ONE. Accessed on 3rd December 2013 at http://www.plosone.org/article/fetchObject.action?uri=info%3Adoi%2F10.1371%2Fjournal.pone.0070104&representation=PDF

Farahany, N. (2012) A Neurological Foundation for Freedom. Stanford Technology Law Review 4 accessed on 29th November 2013 at http://stlr.stanford.edu/pdf/farahany-neurological-foundation.pdf

Fodor, J. (1983) The Modularity of Mind. Cambridge, MA: MIT Press.

Hart, H. & Rubia, K. (2012) Neuroimaging of child abuse: a critical review. Frontiers in Human Neuroscience, 6, Article 52. doi:10.3389/fnhum.2012.00052

Ioannidis, J. (2011) Excess significance bias in the literature on brain volume abnormalities. Archives of General Psychiatry 68, 773–780.

Janssen, I., Hanssen, M., Bak, R., Bijl, V., De Graaf, R., Vollebergh, W., McKenzie, K. & Van Os, J. (2003) Discrimination and delusional ideation. British Journal of Psychiatry, 182, 71–76.

Karlsen, S. & Nazroo, J. (2002) Relation Between Racial Discrimination, Social Class, and Health Among Ethnic Minority Groups. American Journal of Public Health, 92, 624–631.

Karlsen, S., Nazroo, J., McKenzie, K., Bhui, K. & Weich, S. (2005) Racism, psychosis and common mental disorder among ethnic minority groups in England. Psychological Medicine, 35, 1795–1803. doi:10.1017/S0033291705005830

Logothetis, N. (2008) What we can do and what we cannot do with fMRI. Nature, 453, 869–878. doi:10.1038/nature06976

Noë, A. (2009) Out of our Heads: Why you are not your brain, and other lessons from the biology of consciousness. New York: Hill and Wang.

Oyebode, F. & Humphreys, M. (2011) The Future of Psychiatry. British Journal of Psychiatry, 199, 439–440. doi:10.1192/bjp.bp.111.092338

Read, J., Perry, B.D., Moskowitz, A. & Connolly, J. (2001) The contribution of early traumatic events to schizophrenia in some patients: a traumagenic neurodevelopmental model. Psychiatry, 64, 319–345.

Read, J., van Os, J., Morrison, A. & Ross, C. (2005) Childhood trauma, psychosis and schizophrenia: a literature review with theoretical and clinical implications. Acta Psychiatrica Scandinavica, 112, 330–350. doi:10.1111/j.1600-0447.2005.00634.x

Read, J., Bentall, R. & Fosse, R. (2009) Time to abandon the bio-bio-bio model of psychosis: exploring the epigenetic and psychological mechanisms by which adverse life events lead to psychotic symptoms. Epidemiologia e Psichiatria Sociale, 18, 299–310.

Salimpoor, V., van den Bosch, I., Kovacevic, N., McIntosh, A., Dagher, A. & Zatorre, R. (2013) Interactions Between the Nucleus Accumbens and Auditory Cortices Predict Music Reward Value. Science, 340, 216–219. doi:10.1126/science.1231059

Stufflebeam, R.  and Bechtel, W. (1997)  PET: Exploring the Myth and the Method Philosophy of Science, Supplement. Proceedings of the 1996 Biennial Meetings of the Philosophy of Science Association. 64, Part II: Symposia Papers (Dec., 1997), pp. S95-S106 accessed on 2nd December 2013 at  http://www.jstor.org/stable/188393

Thomas, P. (2014) Psychiatry in Context: Experience, Meaning and Communities. Forthcoming June 2014. Ross-on-Wye: PCCS Books.

van Lutterveld, R., Diederen, K., Koops, S., Begemann, M. & Sommer, I. (2013) The influence of stimulus detection on activation patterns during auditory hallucinations. Schizophrenia Research, 145, 27–32. http://dx.doi.org/10.1016/j.schres.2013.01.004

Van Orden, G. & Paap, K. (1997) Functional Neuroimages Fail to Discover Pieces of Mind in the Parts of the Brain. Philosophy of Science, 64 (Proceedings of the 1996 Biennial Meetings of the Philosophy of Science Association, Part II: Symposia Papers), S85–S94.



[4] A voxel (a 'volume element', the three-dimensional analogue of a pixel) is the basic spatial unit used in imaging studies: a small block of tissue whose signal is measured as a single value. Images of organs like the brain are built up from many thousands of voxels.

Philip Thomas

About Philip Thomas

After working as a full-time consultant psychiatrist in the NHS for over twenty years, Philip Thomas left clinical practice in 2004 to write. He has published over 100 scholarly papers, and works in alliance with survivors of psychiatry, service users and community groups, nationally and internationally. He is a founder member and co-chair of the Critical Psychiatry Network. His first book, Dialectics of Schizophrenia, was published by Free Association Books in 1997, and he has co-authored two other books: Voices of Reason, Voices of Insanity with Ivan Leudar, and most recently Postpsychiatry with Pat Bracken. Until recently he was professor of philosophy, diversity and mental health at the University of Central Lancashire, and he is now an honorary visiting professor in the Department of Social Sciences and Humanities at the University of Bradford. He is currently working on two books, one about critical psychiatry and another about madness, meaning and culture.