Reblog: “I fell into this by accident”

I planned my career, but not all went exactly as I envisioned. Opportunities emerged, unexpected findings were too interesting not to pursue, new studies arose from old ones, new collaborators came up with exciting suggestions, and pints with colleagues often turned into interesting, half-crazy ideas that somehow worked. As a result, I have worked across several fields (at the peril of not being able to self-cite very often), and gained a great deal of unexpected experience. It has worked out pretty well, if I do say so myself. Sheffield University’s Think Ahead research support group recently published a short blog post on Krumboltz, Levin and Mitchell’s ‘Planned Happenstance’ theory, which resonated with me. This focuses on making the best of everyday opportunities to succeed and contains some valuable suggestions on how to do this. The Think Ahead post is a neat overview of the theory and worth a read.

Career planning is often presented as applying a logical, step by step approach following a linear pattern to develop your career but the reality is that life in general and our career thinking more specifically often don’t work that way. We have a huge number of options open to us and deciding what is the […]

Read the rest of “I fell into this by accident” at the Think Ahead Blog

Reblog: How do we know what we know?

“All hypotheses emerge from assumptions, whether we recognize them or not.”

Ambika Kamath, a graduate student at Harvard University working on lizards and how their habitat, behaviour, and morphology influence each other, has written an excellent blog post in the wake of a human dimorphism debate. The post is not about human dimorphism, but instead highlights how our assumptions can shape experimental design and therefore results. It can be easy to accept oft-cited facts without critical thought, particularly if they are in line with personal opinion.

Ambika Kamath’s post is reblogged below, and I encourage you to read it.

Ambika Kamath

Over the last few months, there’s been a slow-boiling battle underway between Holly Dunsworth and Jerry Coyne about the evolution of sexual dimorphism in humans, surrounding the question of why male and female humans, on average, differ in size. The battlefield ranged from blogposts to twitter to magazine articles. In a nutshell, Coyne argued that “sexual dimorphism for body size (difference between men and women) in humans is most likely explained by sexual selection” because “males compete for females, and greater size and strength give males an advantage.” His whole argument was motivated by this notion that certain Leftists ignore facts about the biology of sex differences because of their ideological fears, and are therefore being unscientific.

Dunsworth’s response to Coyne’s position was that “it’s not that Jerry Coyne’s facts aren’t necessarily facts, or whatever. It’s that this point of view is too simple and is obviously biased toward some stories, ignoring others. And…

View original post 1,359 more words

MRI and k-space

MRI images are created from raw data contained in a raw data space, called k-space. This is a matrix where MR signals are stored throughout the scan. K-space is considered a bit of a tricky topic, so I will only outline a brief explanation of what k-space is and how it relates to the MR image.

This is a visualisation of k-space.

K-space is what? The first thing to recognise is that k-space is not a real space. It is a mathematical construct with a fancy name – a matrix used to store data. The data points are not simple numbers, they are spatial frequencies. Unlike ‘normal’ frequencies, which are repetitions per time, spatial frequencies are repetitions per distance. They are, in other words, waves in real space, like sound waves. We usually measure them as cycles (or line pairs) per mm. The number of cycles per mm is called the wavenumber, and the symbol used for wavenumber is ‘k’ – hence k-space.

It may be easiest to envision these spatial frequencies as variation in the brightness of the image (or variation in signal spatial distribution, if we are to get more technical). So we have a large matrix of brightness variations, which together make up the complete image. The matrix is full when the image is completed (i.e. the scan is done).


K-space relates to the image how? The second thing to recognise is that k-space corresponds (although not directly) to the image. If the image size is 256 by 256, then the k-space will have 256 columns and 256 rows. This doesn’t mean, however, that the bottom right hand k-space spatial frequency holds the information for the bottom right hand image pixel. Each spatial frequency in k-space contains information about the entire final image. In short, the brightness of any given ‘spot’ in k-space indicates the amount that that particular spatial frequency contributes to the image. K-space is typically filled line by line (but we can also fill it in other ways).

Frequencies across k-space. Typically, the edges of k-space hold high spatial frequency information compared to the centre. Higher spatial frequencies give us better resolution, and lower spatial frequencies give us better contrast information. Think of it like this: when you have abrupt changes in the image, you also get abrupt variations in brightness, which means high spatial frequencies. No abrupt changes means lower spatial frequencies. This in effect means that the middle of k-space contains one type of information about the image (contrast), and the edges contain another type of information (resolution/detail). If you reconstruct only the middle of k-space, you get all the contrast/signal (but no edges) – like a watercolour – and if you reconstruct only the edges of k-space, you get all edges (but no contrast) – like a line drawing. Put them together, however, and we get all the information needed to create an image.
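For the numerically inclined, this watercolour/line-drawing split is easy to play with. Below is a minimal sketch (using numpy, with a made-up ‘image’ of a bright square) that splits a toy k-space into its centre and its edges and reconstructs each part separately:

```python
import numpy as np

# A made-up 'image': a bright square on a dark background.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0

# k-space is (for our purposes) the 2D Fourier transform of the image,
# shifted so the low spatial frequencies sit in the middle of the matrix.
kspace = np.fft.fftshift(np.fft.fft2(img))

# Keep only the middle of k-space (low spatial frequencies)...
centre = np.zeros_like(kspace)
centre[28:36, 28:36] = kspace[28:36, 28:36]
# ...and, separately, only the edges (high spatial frequencies).
edges = kspace - centre

# Reconstruct each part with the inverse transform.
recon_centre = np.fft.ifft2(np.fft.ifftshift(centre))
recon_edges = np.fft.ifft2(np.fft.ifftshift(edges))
img_centre = np.abs(recon_centre)   # the 'watercolour': contrast, soft edges
img_edges = np.abs(recon_edges)     # the 'line drawing': edges, no contrast
```

The centre-only reconstruction is a blurred version of the square (the overall bright/dark contrast survives), the edges-only one is essentially an outline, and adding the two k-spaces back together recovers the original image exactly.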


Transformation: Notes and chords. The route from k-space to image is via a Fourier transform. The idea behind a Fourier transform is that any waveform signal can be split up into a series of components with different frequencies. A common analogy is the splitting up of a musical chord into the frequencies of its notes. Like notes, every value in k-space represents a wave of some frequency, amplitude and phase. All the separate bits of raw signal held in k-space (our notes) together can be transformed into the final MRI image (our chord, or perhaps better: the full tune). One important aspect of the Fourier transform is that it is not dependent on a ‘full’ k-space. We may leave out a few notes and still get the gist of the music, so to speak.
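The chord analogy is easy to demonstrate. In the sketch below (with illustrative frequencies, not real MR signals), three sine ‘notes’ are summed into a ‘chord’, and the Fourier transform picks the notes straight back out:

```python
import numpy as np

fs = 8000                        # samples per second; one second of 'sound'
t = np.arange(fs) / fs
# Three 'notes' summed into a 'chord' (frequencies chosen for convenience).
chord = (np.sin(2 * np.pi * 440 * t)
         + np.sin(2 * np.pi * 550 * t)
         + np.sin(2 * np.pi * 660 * t))

# The Fourier transform splits the chord back into its component notes.
spectrum = np.abs(np.fft.rfft(chord))
notes = np.flatnonzero(spectrum > 0.5 * spectrum.max())
print(notes)  # → [440 550 660]
```

Each value in k-space plays exactly the role of one of these frequency bins, just in two spatial dimensions instead of time.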

Cutting the k-space. Some types of scan collect only parts of k-space. These are fast imaging sequences, but they come at the cost of signal to noise. Less signal to noise means that the amount of signal from the object being measured is reduced compared to the noise that we encounter when we image the object. We always get a certain amount of noise when imaging: from the scanner, the environment and sometimes also the object being imaged. A low signal to noise ratio is typically bad for image quality. Nevertheless, we can get away with collecting fewer lines in k-space because k-space has a built-in symmetry: for a real-valued image, one half of k-space is the mirrored complex conjugate of the other half. This allows us to mathematically fill in the missing lines from the lines we did collect. Some fast scan sequences sample only just over half of the lines in k-space, and we can still reconstruct the image from the information gathered.
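One common flavour of this, often called partial Fourier imaging, leans on a mathematical property rather than the shape of the object: for a real-valued image, each point in k-space is the complex conjugate of the point mirrored through the centre. A toy numpy sketch, with a random 8×8 ‘image’:

```python
import numpy as np

rng = np.random.default_rng(0)
img = rng.random((8, 8))             # a small, real-valued 'image'
k = np.fft.fft2(img)                 # its k-space

# Index grids for the mirrored position (-ky, -kx), wrapped around the matrix.
ky, kx = np.meshgrid(np.arange(8), np.arange(8), indexing="ij")

# Pretend the scanner only collected the first five lines of k-space...
half = k.copy()
half[5:, :] = 0
# ...and fill the missing lines from the conjugate of the collected ones.
half[5:, :] = np.conj(half[(-ky[5:, :]) % 8, (-kx[5:, :]) % 8])

recon = np.fft.ifft2(half).real      # reconstruct from the patched k-space
```

The reconstruction matches the original image to within floating-point error, even though almost half of k-space was never ‘collected’. Real half-Fourier methods need extra care because MR images are not exactly real-valued, but the principle is the same.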

The problem of motion. If there is any motion during the scan, there will be inconsistencies in k-space. The scanner does not ‘know’ that the object moved, so it tries to reconstruct the image from data that is not consistent. This means that the reconstruction via the Fourier transform will be flawed, and we can get signal allocated to the wrong place, signal replicated where it should not be or signal missing altogether. We see this as distortions or ‘artefacts’ in the final image. As the data is collected over time, the longer the collection time, the more vulnerable the scan is to motion. Data collected in the phase encoding direction (which takes a relatively long time, see my previous post on gradients) is therefore more vulnerable than data collected in the frequency encoding direction. In a later blog post, I will discuss how we can use motion measurements to adjust the spatial frequencies in k-space in a way that removes the effect of the movement. This is partly possible because, as described above, we do not need the full k-space to do a Fourier transform and get the image.
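To see how an inconsistency turns into an artefact, we can fake some motion in a toy k-space. Below (numpy, made-up numbers), every other phase-encode line is given the phase ramp it would have picked up if the object had shifted sideways mid-scan; the result is the classic ‘ghost’ displaced by half the field of view along the phase-encoding direction:

```python
import numpy as np

# A made-up object: a bright square.
img = np.zeros((64, 64))
img[24:40, 24:40] = 1.0
k = np.fft.fft2(img)                 # k-space; rows = phase-encode lines

# Fake motion: on every other phase-encode line the object has shifted
# 4 pixels sideways, which multiplies those lines by a phase ramp.
ramp = np.exp(2j * np.pi * 4 * np.arange(64) / 64)
corrupt = k.copy()
corrupt[::2, :] *= ramp[None, :]

recon = np.abs(np.fft.ifft2(corrupt))

# The inconsistency puts signal where the object never was: a faint copy
# ('ghost') of the square's edges, half a field of view away.
ghost_energy = recon[56:, :].sum()   # rows the original left empty
```

Here ghost_energy is well above zero even though the original image is zero everywhere in those rows: inconsistent k-space lines become replicated signal in the wrong place.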

Summary. The take home message on k-space is as follows:

  1. K-space is a matrix for data storage with a fancy name.
  2. It holds the spatial frequencies of the entire scanned object (i.e. the variations in brightness for the object).
  3. We can reconstruct the image from this signal using a Fourier transform.
  4. The reconstruction can still work if we sample only parts of k-space, meaning that we can get images in much shorter time.
  5. Problems during data collection (i.e. motion) may cause errors in k-space that appear as ‘artefacts’ in the image after the Fourier transform.

 

Caveat. There is more to k-space than what I have described above, as a medical physicist would tell you. While it is good to have an appreciation of what k-space is and is not, a detailed understanding is generally not needed for those of us simply using MRI in our work rather than working on MRI. This post is meant as a brief(ish) introduction, not a detailed explanation, and contains some simplifications.

MRI: location, location, location

Say you want to give an orange an MRI scan. You pop it in the scanner, apply your radiofrequency pulse, and receive the signal. This signal will differ depending on which part of the orange it comes from, for example whether it is the flesh or the peel. But how would you be able to tell where the different types of signal come from? How would you tell that the peel signal is coming from the surface and the flesh signal from inside? In my previous post, I outlined what happens during an MRI scan. Here, I will (briefly) explain how we can tell which signal comes from which place in the object that we are scanning. This has to do with gradients.

Gradients. Gradients are additional magnetic fields that we apply to specify the location that we want to image. These fields vary slightly from the main magnetic field. The resulting overall field (the main field plus the gradients) now contains variation. Specifically, it changes linearly with distance. Protons will experience different fields depending on where they are in the object, and as a result, they have different frequencies. In other words, we get a different signal from the protons depending on where they are in the object that we image.

We have two types of gradients that we employ: frequency-encoding gradients and phase-encoding gradients.

Frequency encoding is quite simple. We take our original field (B0), add a gradient field (Gf), and we get a new field (Bx) for any given point (x) in space. Like this: Bx = B0 + xGf

The spin frequency of the protons depends on their position along this new field. The frequency at point x is the sum of the frequency induced by the main field, and that induced by the new gradient. We can now differentiate between signal from different parts of the brain to some extent (see image below).

When we add a frequency encoding gradient (blue) to our main field (green), we can differentiate between signal along this gradient axis (blue arrow). We cannot tell the difference between signal along the other axis (red arrow).
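Plugging illustrative numbers into Bx = B0 + xGf shows how the gradient spreads the proton frequencies out along the encoding axis. (The gyromagnetic ratio of hydrogen is roughly 42.58 MHz per tesla; the field and gradient strengths below are made up for the example.)

```python
gamma = 42.58e6            # Hz per tesla, hydrogen (approximate)
B0 = 1.5                   # main field in tesla (illustrative)
Gf = 10e-3                 # gradient in tesla per metre (illustrative)

def larmor(x):
    """Proton frequency (Hz) at position x (metres) along the gradient."""
    Bx = B0 + x * Gf       # the field from the post: Bx = B0 + x*Gf
    return gamma * Bx

for x in (-0.1, 0.0, 0.1):
    print(f"x = {x:+.1f} m: {larmor(x) / 1e6:.3f} MHz")
```

With these numbers, protons 20 cm apart differ by about 85 kHz, and it is exactly this position-to-frequency mapping that lets us tell their signals apart.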

To complete the picture, we need to add phase encoding gradients. We do this by applying another gradient designed to make signal fall out of sync along the phase encoding direction (red arrow axis in the image above). Although the frequency of the signal remains the same, the new gradient will cause it to have a different phase depending on where it comes from (along the red arrow axis). Think of it as a long line of cars waiting at a red light, all in sync. The light turns green, and the first car starts, then the second, then the third, and so on. Now, the cars are out of sync: some are standing still, others are slowly moving, and the first cars are moving at full speed. Also, at the same time, the cars at the back of the queue have become tired of waiting and start reversing (first the one all the way at the back, then the next one, and so on). The full speed profile of the row of cars suddenly has a great deal of variation, with some going forward at different speeds and some going backward at different speeds. In terms of MRI, the shift in phase causes the sum of the frequencies (the sum of the car speeds) along the red arrow axis to vary, and we can use this variation to figure out the individual signals (or individual cars) along the axis.

Through these frequency and phase encoding steps, we can figure out the strength of the signal and its location along both axes.

Frequency vs Phase Encoding. An important difference between frequency encoding and phase encoding is speed. Frequency encoding can be done with a single radiofrequency pulse (see my previous blog post on what happens during an MRI). We apply a gradient and a pulse, and all the frequencies can be read out at once. It is fast. For phase encoding, we need to apply a radiofrequency pulse for each phase change. We apply our pulse and then the gradient, let the phase change happen, measure the change, and then we have to apply another pulse and gradient for the next phase step. For an image with 256 phase-encoding steps, we will have to do this 256 times, each taking as long as one TR (repetition time). It is slow. A consequence of this is that the phase encoding is more vulnerable to movement than the frequency encoding.
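The cost is easy to put numbers on. A back-of-the-envelope sketch, with illustrative TR values and one TR per phase-encoding step:

```python
n_phase = 256                        # phase-encoding steps (one per line)
for TR in (0.5, 2.0):                # repetition time in seconds (examples)
    minutes = n_phase * TR / 60      # total phase-encoding time
    print(f"TR = {TR:.1f} s -> about {minutes:.1f} minutes of encoding")
```

Even a modest TR multiplied by 256 steps adds up to minutes, which is plenty of time for a patient to move.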

In my next blog post, I will describe how we store the MRI signal during the scan and how the signal can be transformed into an image.

What happens during an MRI?

MRI is complex, but the basic events in the scanner are quite straight-forward. Below is a short, simple guide to what happens during an MRI scan without too much physics to complicate matters. It explains the actual events in the scanner and a simplified overview of the parameters we can use to change the images we collect.


  1. First we align the protons in the object we want to scan with the magnetic field of the scanner (B0). This happens naturally when the object is placed into the scanner, as the slightly magnetic protons adjust themselves to match the magnetic field of the scanner.
  2. Second, we use a radio frequency (RF) pulse to ‘tip’ these aligned protons out of their alignment with the magnetic field. This pulse is sometimes called B1 and is applied at a 90° angle to B0. This causes the protons to rotate, and we can measure this rotation with RF measurement coils. RF coils are essentially loops of wire, and the changing magnetic flux of the protons induces an electric current through these loops. This is because changes in electrical currents generate magnetic fields, and changes in magnetic fields generate electrical currents. RF coils may often be designed to both deliver the RF pulse (through an applied electrical current) and receive the signal (through the resulting change in magnetic flux). During the rotation, two things happen.
    1. Protons begin to align themselves again with B0. The speed of this realignment is called the T1 relaxation time. For any given type of object (or tissue, if we are doing medical imaging), the composition of the object will cause its protons to realign at different rates. Faster realignment means brighter signal.
    2. Protons become out of phase with each other. This reduces the signal we can measure with our coil (as the rotations of the protons are no longer ‘pulling in the same direction’). The speed at which this happens is called the T2 relaxation time. Protons remaining in phase for longer means brighter signal.
  3. The RF pulse is applied again, to repeat the procedure. The average of all these repeats gives us a clear MR image.

The contrasts of the scan (T1, T2) are determined by two parameters: repetition time (TR) and echo time (TE).

TR is the time between the RF pulses. If we have a long TR, all protons in the object have time to realign with B0. If we have a short TR, some protons may not have fully realigned by the time the next RF pulse arrives. In terms of medical imaging, some tissues will need longer to have all their protons realign than other tissues. If the ‘slow’ tissues have not realigned within the TR, the signal from these tissues will be less than the ‘fast’ tissues. This way, we can tell the difference between different tissues.

TE is the time from the RF pulse until we measure the signal induced by the rotating protons. Some types of tissue will have protons that fall out of phase (‘dephase’) faster than protons in other types of tissue. For example, protons in fluids have fewer obstacles, and will remain in phase for quite a long time. Protons that are constrained by structures may not remain in phase that long. A longer TE means that the protons have more time to dephase, and this will reduce the signal from tissues that dephase quickly more than from tissues that dephase slowly.

Generally speaking, we have three types of contrast: T1-weighted, T2-weighted and proton-density (PD) weighted

A scan sequence with a short TR and short TE is usually called T1-weighted. By ‘a short TE’ we usually mean that the TE is shorter than the T2. In other words, there is not enough time for the protons to dephase properly, and the T2 effects are masked. The shorter TR, on the other hand, means we can easily differentiate between tissues with longer and shorter T1. The scan is therefore T1-weighted. Tissues that are bright in T1-weighted scans are fat and white brain matter. Muscle and grey brain matter are less bright (grey in colour), and fluids tend to be black.

A long TR and long TE scan sequence is usually called T2-weighted. Longer TE means that we get differentiation based on protons dephasing at different rates, and the T2 effects are visible. The longer TR, however, means all tissues have time to have all their protons realigned to B0, so we get no differentiation based on T1 times. The scan is therefore T2-weighted. Tissues that are bright in T2-weighted scans are fat and fluids. Muscle and grey brain matter are grey, and white brain matter is almost black.

A long TR and a short TE means we get neither T1 differentiation (all tissues have time to realign) nor T2 differentiation (little time to dephase), and we call this proton-density (PD) weighted. What remains reflects the actual density of protons in the tissues. A short TR and long TE, on the other hand, would mix T1 and T2 effects that largely cancel each other out. We don’t use this type of scan, as it doesn’t yield any useful information.
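These TR/TE trade-offs can be sketched with the standard simplified spin-echo signal equation, S ∝ PD · (1 − e^(−TR/T1)) · e^(−TE/T2). The tissue values below are rough, illustrative numbers, not measurements:

```python
import math

def signal(pd, t1, t2, tr, te):
    """Simplified spin-echo signal: S ∝ PD * (1 - exp(-TR/T1)) * exp(-TE/T2)."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# Very rough values (relative PD, T1 in ms, T2 in ms) at 1.5 T.
tissues = {"white matter": (0.7, 790, 90), "fluid (CSF)": (1.0, 4000, 2000)}

for name, (pd, t1, t2) in tissues.items():
    t1w = signal(pd, t1, t2, tr=500, te=15)     # short TR, short TE
    t2w = signal(pd, t1, t2, tr=4000, te=100)   # long TR, long TE
    print(f"{name}: T1-weighted {t1w:.2f}, T2-weighted {t2w:.2f}")
```

With these numbers, white matter comes out brighter on the T1-weighted settings and fluid on the T2-weighted ones, matching the tissue descriptions above.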


In this post, I have summarised some of the basics about MR imaging. In my next post, I will move on to outline some of the basics about raw MR data processing, covering k-space and Fourier transforms.

Reblog: The Firefly Scanner is Featured on the BBC

Below is a blog post I wrote for our group after the Firefly neonatal scanner was highlighted on the BBC. It’s a great little scanner and I’m thrilled to be involved with this research. (Link to the post on our group’s blog.)

Building on previous work, Professor Martyn Paley has developed the concept of a bespoke MRI scanner for newborn babies (neonates) along with Professor Paul Griffiths. The result is a unique, full-strength neonate scanner, built by GE Healthcare and installed in the Neonatal Intensive Care Unit in the Jessop Wing. Named ‘Firefly’, the scanner is one of only two such prototype scanners in the world, and uniquely marries diagnostic imaging with easy access to our neonatal unit.

Featured last week on the BBC, the Firefly scanner has gained deserved attention. The BBC’s video of the scanner in action shows how important it is for the healthcare of newborn babies to have powerful scanning facilities within quick and easy reach. Tiny compared to adult scanners (which can easily weigh several tons), the Firefly would be able to fit in many small Neonatal Intensive Care Units. Ready access to high quality brain imaging is a major advantage over the more commonly used ultrasound imaging.

Babies can be difficult to image as they rarely stay still. Our group have also recently published one of the first research papers with data collected using the Firefly scanner, in which we discuss a potential new way of correcting motion during MRI in babies. The paper is titled “Wireless Accelerometer for Neonatal MRI Motion Artifact Correction” and freely available.

We are delighted to have the Firefly scanner in Sheffield. It is an important clinical development and opens up exciting new possibilities for linking research on reproduction and development with the health of newborn babies.

The Firefly 3-Tesla neonate MRI scanner in the Jessop Wing. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)

Reblog: Success in academia involves a lot of failure

Eric Weiskott, an Associate Professor of English at Boston College, has written a CV of his failures here. Whilst such lists are not often published, I don’t know a single academic or researcher who doesn’t have a similar one. I certainly do. There are rejected applications, failed experiments, unpublished papers and unfunded grants. The list is long. Incidentally, most researchers I’ve met have echoed the same sentiment: the key to success is to try and try again. The best researchers also add that it’s important not to let rejection bog you down. A few years ago, I attended an early career talk by Russell Foster, a neuroscientist who discovered photosensitive ganglion cells in the retina and by most measures a successful scientist. Based on his own experience, he listed the approximate number of applications required to land funding. As a young researcher, it was a sobering talk. However, it was also oddly encouraging to be told that as long as you do good work, it’s probably only a numbers game.

Prof Weiskott’s post is reblogged below, and it is definitely worth a read:

Eric Weiskott

When I think about my career so far, I’m humbled by the generosity of friends and colleagues. I’m also acutely aware of the odds stacked against anyone who tries to enter this profession. My own success, such as it is, was the direct result of a lot of failure. Maybe there is someone out there who succeeds in academia without failing. I am not that person. I want to talk about my experience in the hope that it smashes a few unhelpful myths about academia, publishing, and job-seeking. This is my version of a CV of failures.

Failing to get into grad school

As a senior in college, I applied to MPhil and PhD programs. Most of them rejected me. Programs that rejected me were Brown University, Harvard University, the Marshall Scholarship, Stanford University, University of Connecticut, University of Michigan, and University of Oxford. New York University and the University of Virginia waitlisted me. The University of Cambridge accepted me…

View original post 1,116 more words

Breathlessness and opioids

We’ve recently published a paper on how opioids can modulate breathlessness. (The whole manuscript is open access here). Low-dose opioids can be used for treating chronic breathlessness, but we don’t know exactly how they work.

Opioid receptors exist across the brain. These are part of the internal opioid system (the endogenous opioid system) for natural pain relief. When opioids are used in the clinical setting to treat negative stimuli, such as pain, they influence the unpleasantness and the intensity of the stimulus in different ways(1). In terms of breathing, opioids depress breathing by influencing brainstem respiratory centres(2), causing breathing to stop completely in high doses, and they can also affect higher brain centres(3).

Opioids also have behavioural effects, and may, amongst other things, influence associative learning. Associative learning is when an association between two stimuli is learnt by pairing these together. For example, in chronic breathlessness this could be previously neutral stimuli (e.g. a flight of stairs) and breathlessness. This could create an anticipatory threat response, which means that simply seeing a flight of stairs is enough to bring about breathlessness or the fear associated with breathlessness. This can worsen the breathlessness for the patient in the long run.

In this study, we hypothesized that opioids improve breathlessness in part through changing the anticipatory response to breathlessness. We focused on two brain regions in particular: the amygdala and hippocampus. Both are strongly involved in associative learning(4) and are rich in opioid receptors(5,6).

What did we do? First, we asked healthy volunteers to do a breathing test where we paired three different degrees of breathlessness with three symbols. The symbols were presented immediately before the volunteer was made breathless and were matched to the level of breathlessness that they signified. We invited those volunteers who learnt to associate each symbol with its corresponding breathlessness to undertake two MRI scans in random order. Before one scan they received a low-dose opioid (remifentanil) infusion, and before the other they received a control (saline) infusion. During the scans, the volunteers repeated the breathlessness/symbol task. Below is a schematic of the breathing circuit used for the study, showing how the different levels of breathlessness were induced.

Figure 1. Breathing circuit for inducing breathlessness

What did we find? We were able to show that breathlessness anticipation in the control condition was processed in the right anterior insula and operculum, and that the breathlessness itself was processed in the insula, operculum, dorsolateral prefrontal cortex, anterior cingulate cortex, primary sensory cortices and motor cortices. These regions have been identified in other studies on breathlessness (as discussed in a previous blogpost).

However, in the opioid condition, we saw the following:
1. Opioids reduced breathlessness unpleasantness (Figure 2)
2. This reduction correlated with reduced activity in the amygdala and hippocampus during anticipation of breathlessness (Figure 3)
3. This reduction also correlated with increased activity in the anterior cingulate cortex and nucleus accumbens during the actual breathlessness (Figure 3)
4. During the actual breathlessness, the opioid infusion directly reduced activity in the anterior insula, anterior cingulate cortex and sensory motor cortices (Figure 4).

Figure 2. Ratings of breathlessness and intensity. Abbreviation: Remi=remifentanil

Reduction in unpleasantness. The reduction in unpleasantness with opioids was expected – the different effect of opioids on intensity and unpleasantness has been shown in many other negative conditions, including pain(1). Interestingly, we could confirm that this lowered unpleasantness correlated with reduced activity in brain regions linked with associative learning and memory (amygdala and hippocampus) before the breathlessness began (Figure 3, bottom). This reduced activation in the amygdala and hippocampus, regions that are needed for the formation of unpleasant memories, may explain how low-dose opioids gradually become more effective as a therapy over the first week of administration. The reduction in amygdala/hippocampus activation may mean that fewer new negative memories and reaction patterns are formed.

Figure 3. Brain activity linked to lowered unpleasantness, relating to breathlessness (top) and anticipation of breathlessness (bottom). NA=nucleus accumbens, paraCC=paracingulate cortex, ACC=anterior cingulate cortex, PC=precuneus, ant hipp=anterior hippocampus, amyg=amygdala.

We also see that the reduced unpleasantness correlates with activation during the actual breathlessness in the anterior cingulate cortex and nucleus accumbens (Figure 3, top). These are parts of the endogenous opioid system, which reduces the perception of negative stimuli. This means that the reduced unpleasantness our volunteers felt is linked with these regions being more active. In other words, they may act to further dampen the negative sensation. Less unpleasantness during the breathlessness means that even fewer negative memories are likely to be formed.

Reduction in breathlessness activation. Finally, opioids directly reduced activity in the anterior insula, anterior cingulate cortex, sensory motor cortices and brainstem (Figure 4). The activation during control (saline) in the figure below is typical of breathlessness, and has been found in several other studies using breathing challenges.


Figure 4. Brain activity during breathlessness in the control (saline) condition (top) and where it is reduced by the opioid (remifentanil, bottom). Increased activation is shown in red-yellow, and decreased in blue. M1/S1=primary motor & sensory cortices, OP=operculum, dlPFC=dorsolateral prefrontal cortex, Thal=thalamus, ACC=anterior cingulate cortex, vmPFC=ventromedial prefrontal cortex, PAG=periaqueductal grey, SMG=supramarginal gyrus.

The areas that are reduced in activation with opioids are commonly activated in breathlessness and central to respiratory sensation. For example, the anterior insula, the most commonly activated brain region during breathlessness, is believed to assess the quality of the stimulus and help control interpretation. The anterior cingulate cortex, which is also commonly activated in breathlessness, is similarly involved in control of negative emotions. These regions may be part of an interpretation process that is shaped by expectation and learning(7) similar to other control systems in the body (e.g.(8)).

Summary: Opioids manipulate brain regions associated with learning, negative memory formation and negative stimulus control. We have shown that the opioid remifentanil may alter breathlessness perception and the brain regions associated both with anticipation of and actual breathlessness. This suggests that opioids work to reduce breathlessness in part through direct effects on respiratory control mechanisms in the brainstem, insula and anterior cingulate cortex, and in part through changes in how breathlessness is anticipated, by changing associative learning processes in the amygdala/hippocampus.

References:
1. Pain, 22 (1985), pp. 261–269
2. Br J Anaesth, 100 (2008), pp. 747–758
3. J Neurosci, 29 (2009), pp. 8177–8186
4. Curr Opin Neurobiol, 14 (2004), pp. 198–202
5. Life Sci, 83 (2008), pp. 644–650
6. Pain, 96 (2002), pp. 153–162
7. Nat Rev Neurosci, 16 (2015), pp. 419–429
8. Exp Physiol, 92 (2007), pp. 695–704

DOI: http://dx.doi.org/10.1016/j.neuroimage.2017.01.005
Link: https://www.ncbi.nlm.nih.gov/pubmed?term=10.1016%2Fj.neuroimage.2017.01.005
All images presented with permission, creative commons licence.

Breathlessness & the Brain

Breathlessness can be many things. For example, it can be the shortness of breath after exercise – short-lived and laced with endorphins – or it can be the frightening gasping for breath experienced by patients with a range of diseases from cardiac failure and cancer to respiratory disease. From a physiological point of view, these may look quite similar, but they are not really the same thing. Breathlessness is the sensation produced by sensory input evaluated within the context of psychological and environmental factors. In short, the quality of breathlessness depends on why, by whom and where it is experienced – it’s situational and subjective. This means it cannot be approached, from a medical point of view, with a one-size-fits-all solution. It’s important to understand its nuances.

So, which parts of the brain are involved in processing breathlessness? The answer is quite simple: we don't know yet. To date, studies are few and far between, and the majority have looked at breathlessness in healthy controls, which may or may not translate to various patient groups. While nothing is known for certain, particularly for patients, we do have a few likely suspects.


The usual suspects. The first is the insular cortex. This is a region of the brain that has been identified in the majority of neuroimaging studies on breathlessness. It plays a role in the conscious awareness of body state and is involved in the perception of other unpleasant sensations, such as pain. The role of the insula is two-fold, with the posterior (towards the back) part processing the physical aspects and the anterior (towards the front) part processing the affective (relating to moods and feelings) aspects of the sensation 1. This anterior-posterior division may also be present in breathlessness, as the anterior insula appears to be more associated with the unpleasantness of breathlessness.

The next on the list of likely suspects are somatosensory and motor regions (e.g. 2), which are also activated in a number of breathlessness studies. This is not surprising, given the added work of breathing harder than normal when you’re breathless. In addition to cortical somatosensory/motor regions, we also see activation in the brainstem (particularly in the brainstem respiratory centres) and in the cerebellum (a region associated with motor function). Some of these areas may be mostly involved with breathing itself and not with breathlessness specifically, as they appear frequently in studies looking at simple breathing responses without any breathlessness.

Then there are regions that sometimes appear and sometimes don’t: the prefrontal cortex, the periaqueductal grey (PAG), the amygdala and the anterior cingulate cortex (ACC). In the case of the PAG, which is a structure in the midbrain involved in pain processing and fight-or-flight responses, this may simply be because it is small and hard to image. Recent studies using high resolution imaging have found that the PAG is involved with unpleasant respiratory sensations, and that parts of the PAG have different roles in processing respiratory threat (see Figure 1 below) 3. Looking further into its sub-divisions, the lateral PAG is downregulated during restricted breathing, and the ventrolateral PAG is upregulated. This is interesting as the lateral PAG has been associated with active coping strategies for stressful situations that can be escaped, and the ventrolateral PAG with passive coping strategies for stressful situations that cannot be escaped. In short, it seems as if different parts of the PAG are involved in different aspects of breathlessness.

Fig 1. 7T FMRI of respiratory threat in the PAG. Faull et al. 2016.

The prefrontal cortex has been implicated in the processing of breathlessness cues (i.e. a non-physical stimulus) in COPD patients (medial prefrontal cortex, mPFC), as has the ACC (see Figure 2 below) 4. The latter has also been identified in many studies on breathlessness in healthy volunteers, and the former is an area of the prefrontal cortex involved in emotion processing, particularly associated with responses to fear and threat. In the above study, both patients and controls showed activation in the insula (labelled ‘Conjunction’). Opioids, which can make breathlessness less unpleasant, dampen brain responses to breath holding in the prefrontal and anterior cingulate cortices as well as in the insula 5. So, it’s a good guess that the prefrontal cortex and ACC are involved in the modulation of breathlessness.

Fig 2. COPD patients and controls. Response to breathlessness cues. From Herigstad et al. 2015.

Similarly, the amygdala, which is frequently identified in a whole range of studies relating to emotion, threat appraisal and fight-or-flight responses, is also activated in some studies of breathlessness. The amygdala is strongly connected to the anterior insula, and breathlessness studies that have seen activation in the amygdala also show large activation in the insula.

There are also a few other regions that are occasionally found, and we don’t quite know how they fit in. These include the aforementioned cerebellum, the dorsolateral prefrontal cortex and the precuneus.

The cerebellum is probably mostly involved in the motor response to breathlessness, as it is a centre for coordination of motor function; however, since the cerebellum has also been associated with cognitive function, we can’t yet pin down its role in breathlessness. The dorsolateral prefrontal cortex may be part of the cognitive evaluation of breathlessness, as this structure is associated with attention and working memory. The precuneus could be involved in both sensory and cognitive aspects of breathlessness, depending on which part of it is active. It is a poorly mapped structure, but in general it has a sensorimotor anterior region, which links to sensory/motor areas of the brain and the insula, and a cognitive central region, which links to prefrontal regions, including the dorsolateral prefrontal cortex. It also links to the thalamus, a subcortical structure that relays a vast range of information between the brainstem and the higher brain areas.

Linking it all together. We don’t yet know how this all ties together. However, one could speculate on ways in which breathlessness is processed in the brain:

Brainstem breathing control. We know that respiratory centres in the brainstem receive peripheral input (input from the rest of the body) and adjust the breathing pattern in response to this. The brainstem is crucial for breathing – without it, no breathing occurs – and it almost certainly plays a part in the actual breathing response to breathlessness. The cerebellum, which is probably involved in the motor response to breathlessness, connects to the higher brain areas through the brainstem. So far so good.

PAG as a gateway. Peripheral input is also believed to be received in the PAG, which could act as a gateway of sorts. The lateral PAG relays signal directly to primary motor/sensory cortices and the (posterior?) insula, and this path may be processing breathlessness intensity. The ventrolateral PAG relays signal directly to the prefrontal cortex, (anterior?) insula and also to motor regions of the brain, and this path may be processing the threat and/or possible responses to the breathlessness. The PAG probably also connects with the thalamus, which in turn acts as a hub (see below).

Thalamus as a hub. Finally, signal could also be relayed via the PAG (or directly) to the thalamus, which has been identified in a range of breathlessness studies and is a common hub for cortical communication. From the thalamus, signal is transmitted to a range of cortical areas (including prefrontal regions and the amygdala via the medial/frontal thalamus, and somatosensory and motor regions via the ventroposterior thalamus). Breathlessness is likely modulated by several of these cortical areas and how they interact.

The complexity of the interactions may be best explained by an example:

Anxiety relating to threat is modulated by activation in both the medial prefrontal cortex (mPFC) and the amygdala. The mPFC possibly influences threat by dampening activation in the amygdala. So far, so good. The prefrontal cortex receives signal from the PAG, and may also receive input from the ACC. Both the ACC and PAG are in turn linked to the anterior insula. The anterior insula also receives input from the thalamus, which relays signal to the amygdala. The thalamus also receives input from the PAG. So now we have a network of interconnected regions which may all influence each other and work to fine-tune the anxiety response. Furthermore, the anterior insula may also receive and integrate information on the physical sensation from the posterior insula, which is linked to the ventroposterior thalamus. This part of the thalamus is connected with somatosensory and motor cortices. These are also influenced by the PAG. The thalamus and somatosensory/motor cortices also relay signal to the precuneus, which in turn could connect to the dorsolateral prefrontal cortex, thus incorporating cognitive processing. And so on. It can get a bit complex.

A suggested network is presented in the figure below.

Fig 3. Possible breathlessness network. Purple = connected cortical regions, black = signal to/from periphery, blue = signal from PAG. ‘Hubs’ are shown in green. PFC = prefrontal cortex (encompasses both the medial (emotional/threat processing) and dorsolateral (cognitive processing) PFC).

In short, various studies have shown that a range of brain regions are activated during breathlessness, but we don’t yet know exactly how they are involved in processing the sensation. We may still only guess at the full picture, based on breathlessness work as well as studies on other unpleasant sensations (pain, mostly), threat or emotional processing. Many of the same processing mechanisms are likely to be shared with these related sensations. While the above figure outlines some possible networks for the processing of breathlessness, based on our current understanding of breathlessness and related conditions, (much) more information is needed.

References:
1. Oertel, B., Preibisch, C., et al. Clin Pharmacol Ther 2008; 83: 577–588.
2. Hayen, A., Herigstad, M., et al. NeuroImage 2012; 66: 479–488.
3. Faull, O., Jenkinson, M., et al. eLife 2016; 5: e12047.
4. Herigstad, M., Hayen, A., et al. Chest 2015; 148(4): 953–961.
5. Harvey, A., Pattinson, K., et al. J Magn Reson Imaging 2008; 28: 1337–1344.

Predicting trouble: EEG, NO and stroke

We’ve recently published a paper titled “Electroencephalographic Response to Sodium Nitrite May Predict Delayed Cerebral Ischemia After Severe Subarachnoid Hemorrhage” on how electroencephalography (EEG for short) can be used to figure out which patients with a certain type of stroke (subarachnoid hemorrhage) will develop a complication after the initial brain bleed and which will not.

Subarachnoid hemorrhage is a type of stroke that can happen at any age. It is a bleed on the surface of the brain, in a space between the arachnoid membrane and the pia mater surrounding the brain (see figure below).

Figure: The meninges, showing the subarachnoid space between the arachnoid membrane and the pia mater.

The bleed is usually caused by an aneurysm (a bulging, weak section of a blood vessel) bursting, causing blood to escape into the subarachnoid space. Here, it puts pressure on the brain tissue (causing tissue damage) and can also reduce blood flow to other parts of the brain (causing lack of oxygen and cell death). The released blood may be toxic to the brain tissue and could cause inflammation. At worst, a subarachnoid hemorrhage results in death or severe brain damage.

Some of the damage is caused directly by the bleed (e.g. the pressure on the brain), and some is caused by how the bleed disrupts the normal control of blood flow in the brain. In particular, it is bad when the nitric oxide pathway stops functioning properly. Nitric oxide helps preserve the circulation in small blood vessels in the brain, partly by enlarging the blood vessels (vasodilation) and by reducing the aggregation of clot-forming platelets. After a bleed, nitric oxide levels in the brain are often reduced. This could be because hemoglobin in the blood stops the enzymes responsible for producing nitric oxide from working properly. Another contributing factor is that the nitric oxide already present in the brain reacts with superoxide (part of the immune response), lowering nitric oxide levels even further.

Nitric oxide disruption appears to be involved in delayed cerebral ischemia, the most common complication after a subarachnoid hemorrhage: an unpredictable loss of oxygen supply to the brain leading to severe, even fatal, brain damage, typically 3–14 days after the initial bleed. For example, people with genetically lower activity in an enzyme that produces nitric oxide have a higher risk of this complication.

Which brings us to our question:

Patients who have the same clinical severity, no measurable genetic differences in nitric oxide production, and who seem exactly alike can go on to show very different outcomes. One can develop ischemia and suffer devastating new brain injuries, while the other returns to normal without complications. How can we tell who will get it and who will not?

EEG measures neuronal (electrical) activity using electrodes placed on the scalp. It picks up fluctuations in voltage from the electric currents in the neurons, and in the clinical setting it is used to measure the spontaneous electrical activity of a brain over time. Different parts of the brain generate different signals: some show activity at low frequencies (slow waves, in the delta and theta bands) and some at higher frequencies (faster waves, in the alpha band). The signal is linked to blood flow. In subarachnoid hemorrhage, EEG can be used to watch cerebral ischemia develop at an early stage, because as blood flow falls, the higher frequencies begin to fade and the lower frequencies steadily increase. In short, the ratio of alpha to delta power, for example, will be reduced in the ischemic patients. However, it can take several days of recording to get a result with this method. It is possible, but not practical.
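To make the alpha/delta ratio concrete, here is a minimal sketch of how such a ratio could be computed from one channel of EEG using Welch's method. This is not the analysis code from the paper: the sampling rate, window length, band edges and the synthetic signal are all illustrative assumptions.

```python
# Illustrative only: compute an alpha/delta ratio (ADR) from one EEG channel.
import numpy as np
from scipy.signal import welch

fs = 250.0                                    # assumed sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(int(fs * 60))       # stand-in for 60 s of EEG data

def band_power(freqs, psd, low, high):
    """Sum the power spectral density over one frequency band."""
    mask = (freqs >= low) & (freqs < high)
    return psd[mask].sum()

# Estimate the power spectrum with 4-second Welch windows.
freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 4))

alpha = band_power(freqs, psd, 8.0, 13.0)     # alpha band (8–13 Hz)
delta = band_power(freqs, psd, 1.0, 4.0)      # delta band (1–4 Hz)
adr = alpha / delta  # falls as slow-wave activity takes over in ischemia
```

In practice the ratio would be tracked per epoch and across electrodes over time; the absolute value matters less than how it changes.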

However, we know that nitric oxide disruption seems to be critically involved in delayed cerebral ischemia. This means that we can give patients a nitric oxide donor (sodium nitrite) to stimulate the nitric oxide pathway, and use EEG to measure how well they respond. By doing this, we can speed the process up: we can see whether or not a patient responds to the sodium nitrite, and thereby which patients show nitric oxide pathway disruption.

Using this method, we showed in our paper that patients who later went on to develop delayed cerebral ischemia showed no change (or a decrease) in the EEG signal whilst infused with sodium nitrite. Those who did not develop delayed cerebral ischemia showed the exact opposite: a strong increase in EEG signal.

Fig: EEG spectrograms from a patient who did not develop delayed cerebral ischemia and one who did, plus a scatter plot showing the group differences in spectrogram ratio (alpha/delta ratio, ADR).

So we have not only shown that the nitric oxide pathway is important in the development of delayed cerebral ischemia; we may now also be able to determine, relatively quickly, who is at risk of this life-threatening complication and who is not.
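The decision logic described above can be sketched very simply: compare the alpha/delta ratio at baseline with the ratio during sodium nitrite infusion. The function name and the example numbers below are invented for illustration; the paper itself defines the actual measures and cut-offs.

```python
# Hypothetical sketch of the test's logic, not code from the paper.
def adr_change(adr_baseline, adr_infusion):
    """Relative change in the alpha/delta ratio during nitrite infusion."""
    return (adr_infusion - adr_baseline) / adr_baseline

# An ADR increase suggests an intact, responsive nitric oxide pathway
# (lower risk of delayed cerebral ischemia); no change or a decrease
# suggests a disrupted pathway (higher risk).
print(round(adr_change(1.0, 1.4), 2))  # a clear increase: prints 0.4
```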

Reference: Garry, P.S., Rowland, M.J., Ezra, M., Herigstad, M., Hayen, A., Sleigh, J.W., Westbrook. J., Warnaby, C.E. and Pattinson, K.T.S. (2016) Electroencephalographic Response to Sodium Nitrite May Predict Delayed Cerebral Ischemia After Severe Subarachnoid Hemorrhage. doi: 10.1097/CCM.0000000000001950