Smoking in the scanner?

There is a new paper out in Scientific Reports, titled “Investigating the neural correlates of smoking: Feasibility and results of combining electronic cigarettes with fMRI”. This study has managed to combine actual smoking with functional MRI (fMRI).

Most studies looking at how the brain processes smoking run into trouble with MRI, because smoking and scanning do not go well together. Hospitals don’t allow smoking, things should generally not be on fire in the MRI scanner, and ventilation is an issue when you’re lying in a narrow bore. Because of this, we haven’t been able to properly look at the sensations and behaviour of smoking alongside the effects of nicotine (and the other active products in cigarette smoke). This study tries to get around these practical problems and to look at the brain response to smoking in real time.

For the practical part, the study used e-cigarettes. E-cigarettes solve some of the problems with smoking in the scanner (fire and, to some extent, ventilation), but they can cause image artefacts and may contain metal. The paper shows that smaller types of e-cigarettes did not cause image artefacts and were safe to use in the scanner from a metal point of view. E-cigarette smoking is a good mimic for traditional smoking, so this is a workable model of ‘the real thing’ that fits with MRI.

In terms of brain responses, the authors found activation in several brain regions associated with smoking e-cigarettes. These regions included motor cortex, insula, cingulate, amygdala, putamen, thalamus, globus pallidus and cerebellum. There were also (relative) deactivations in the ventral striatum and orbitofrontal cortex associated with smoking.


Image from the paper showing brain responses when participants were instructed to smoke. Red-yellow is activation and blue is deactivation.

Some of this activation is (unsurprisingly) linked to movement. The motor cortex activation (stronger on the left-hand side, which corresponds to right-hand movement) is most likely due to the movements associated with smoking. Similarly, cerebellar activation is often related to motion. Other regions are associated more with the effects of smoking. The putamen is part of a brain region called the striatum, which plays a role in reward and in supporting addiction. The ventral striatum (and orbitofrontal cortex) are associated with drug craving.

From a personal point of view, having worked a great deal with breathing, I am excited that the paper showed activation in the insula and cingulate. Both are structures involved in breathing and breathlessness tasks. However, without behavioural measures to link the findings to, it is hard to say what this activation means in this setting. It is important to remember that just because a similar activation pattern occurs with two different tasks, it doesn’t necessarily follow that the activation means the same thing. Each region of the brain typically handles more than one thing, particularly cortical regions.

The authors also found that activation patterns were similar whether the participants were told when to smoke and when to stop (first scan) or could smoke at will (second scan). However, in the second scan, the activation was weaker. The authors suggest that this could be because the task was more variable, meaning more between-subject variance and poorer timing (from an fMRI point of view). It could also be an order effect, as the subjects had more nicotine in their system in the second scan. This fits with the lower activation in reward-related brain regions in the second scan. Or it could simply be because smoking on command and smoking whenever one wants are different situations. Again, it is hard to tell why without other measures.

Nevertheless, this is an interesting paper, both from a methods point of view and for those interested in smoking processing and effects on the brain. It’s also written in a nice and easily accessible way. I’d recommend looking it up:

Reference: Matthew B. Wall, Alexander Mentink, Georgina Lyons, Oliwia S. Kowalczyk, Lysia Demetriou & Rexford D. Newbould. Investigating the neural correlates of smoking: Feasibility and results of combining electronic cigarettes with fMRI. Scientific Reports 7, Article number: 11352 (2017)
DOI: 10.1038/s41598-017-11872-z


Pulmonary rehab: changing the signal

Pulmonary rehabilitation is one of the most effective treatments for breathlessness in chronic obstructive pulmonary disease (COPD), yet its effect is variable. While up to 60% of patients who complete a course of treatment see an improvement, that leaves 40% who do not. Understanding why it works for some and not for others could help personalise and improve treatment for COPD. This is what we’ve focused on in our most recent paper (preprint here) that will be published in the European Respiratory Journal. UPDATE: final published paper here.

A bit of background on how sensations are perceived. When we feel a sensation, our brains often both register and modulate the sensory information from the body. In fact, our sensory perception is probably quite dependent on how the brain processes incoming sensory information. This is influenced by what the brain thinks will happen and why. It is thought that previous experiences (called priors) create expectations in the brain about sensations, and that these are updated whenever the brain receives actual sensory information.

Below is a quote from a paper on how priors influence pain perception by Geuter et al. [1], explaining the concept so clearly I decided to reproduce it in its entirety:

All over the human body, there are receptors that help to alert the brain to potential harm. For example, intense heat on the skin elicits a signal that travels to the brain and activates many parts of the brain. Some of the same brain regions that are switched on by signals of potential bodily harm also help the brain to form expectations about events. A person’s expectations may have a strong influence on how they experience pain. For example, if a person expects that taking a pill will reduce their pain, they may feel less pain even if the pill is a fake.

Exactly how the brain processes pain signals and expectations remains unclear. Does the brain activity simply reflect how intense the heat is? Some scientists think there may be two separate processes going on: one that predicts what will happen and another that calculates the difference between the prediction and what the receptors actually detect. This difference is called a prediction error. If every unpredicted sensory signal elicits a calculation of the prediction error, that would help improve the brain’s future predictions.

This system is open to manipulation. There are many factors that can adjust these priors or weight the incoming sensory information, causing the person to over- or under-perceive sensations. For example, anxiety and attentional bias may cause over-perception of sensations.
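
As a toy illustration (my own sketch, not from the paper), this weighting of priors against sensory input can be written as a precision-weighted average, where each source of information is weighted by its precision (1/variance):

```python
# Toy sketch of precision-weighted perception (illustrative only, not a
# model from the paper): perception is a compromise between the prior
# expectation and the sensory input, weighted by their precisions.

def perceive(prior_mean, prior_precision, sensory_input, sensory_precision):
    """Combine a prior expectation with sensory input.

    The more precise (confident) source dominates: a strong prior can
    override a mild sensation, and vice versa.
    """
    total = prior_precision + sensory_precision
    return (prior_precision * prior_mean
            + sensory_precision * sensory_input) / total

# A mild sensation (3/10) but a strong, anxious prior of severe
# breathlessness (8/10): the prior dominates.
print(perceive(8.0, 4.0, 3.0, 1.0))   # -> 7.0
# The same sensation with a weakened prior (e.g. after rehabilitation):
print(perceive(8.0, 1.0, 3.0, 4.0))   # -> 4.0
```

With a confident prior, a mild sensation is perceived as severe; weaken the prior, or trust the input more, and perception tracks the body again.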

But how? How does this relate to COPD? In COPD, a prior may be formed linking shortness of breath to physical activity, for example climbing stairs. This prior, if bolstered by, for example, anxiety and attentional bias, may begin to dominate and cause over-perception of breathlessness. This means that breathlessness perception would be governed more by the prior and the anxiety/fear than by the input from the body. In this example, a simple flight of stairs becomes a cue for the brain to access its priors, which means generating an expectation of breathlessness and anxiety, all because that is what previous experiences have demonstrated will happen.

Pulmonary rehabilitation, however, challenges these priors. Rehabilitation makes the patient face their breathlessness, but in a safe healthcare setting. This may change the patient’s priors and how they process breathlessness-related cues. If this is the case, we may expect that patients with different priors show different treatment outcomes, and that patients respond differently to cues after treatment than before.

But where? We know that predictions about bodily state and emotion (i.e. priors) are typically generated in a stimulus valuation network. This network consists of many brain regions, including the anterior insula, anterior cingulate cortex (ACC), orbitofrontal cortex and ventromedial prefrontal cortex. There are also more ‘downstream’ regions associated with breathing, including the posterior insula, which processes incoming respiratory sensory information. These regions are responsible for sending sensory information from the body to other parts of the brain (both those dealing with the physical sensation and those processing the emotional impact, such as the stimulus valuation network). The posterior insula, along with regions such as the angular gyrus and the supramarginal gyrus, is also involved in how much attention a physical sensation gets. All of these regions are likely places where pulmonary rehabilitation might change activation patterns.

What we did. We recruited 31 people with COPD and studied them before and after pulmonary rehabilitation. On each visit, we did the same tests: we collected a set of behavioural questionnaires (of which we used one, the Dyspnoea-12 [2], as our main measure of breathlessness); we did a lung function test and an exercise test; and we did a functional brain scan (FMRI) to test their brain activity while they were looking at (and rating) breathlessness-related cues for anxiety (“How anxious would this make you feel?”) and breathlessness (“How breathless would this make you feel?”). *

Behavioural changes. The patients’ anxiety ratings were overall much lower after rehabilitation, and this change correlated with our main measure of breathlessness (Dyspnoea-12). The correlation was influenced by changes in depression in our patients, although we don’t know whether it is the depression that influences anxiety and breathlessness, the anxiety that influences depression and breathlessness, or the breathlessness that influences anxiety and depression. It may well be that all of these factors influence each other. We do, however, know that they are linked. The figure below (Fig 1) shows how all the behavioural and physiological measures are correlated.

Fig 1. Correlation matrices of the measured behavioural variables. Abbreviations: wA, cue ratings of anxiety; wB, cue rating of breathlessness; StG, St George’s Respiratory score; Cat, Catastrophising score; Vig, Vigilance/Awareness score; Dep, Depression score; T Anx, Trait anxiety; S Anx, State Anxiety; Fat, Fatigue; BisBas, inhibition/activation scale; Spir, lung function (FEV1/FVC); ISWT, exercise ability (incremental shuttle walk test).**

While rehabilitation worked for the group as a whole, we saw that there was variability in the treatment response between patients. There was also no improvement in breathlessness ratings, nor was there any change in lung function in the group. Lung function was not linked to any of the behavioural measures, meaning that it isn’t a good measure of the impact of breathlessness in COPD.

Brain changes. Then we looked at how variation in brain activity explained the variation in our patients’ ratings of the cues over the course of their treatment. By looking at how variation in brain activity follows variation in ratings, we could make sure that even the patients who didn’t respond to treatment were included in the analysis. In other words, if a patient didn’t respond, it is likely that their brain activation would not change either, and if a patient got worse, we might see their brain activation go in a different direction from those who got better. This gives us a much stronger idea of which areas are upregulated and downregulated (or stay the same) with successful treatment.

Looking at this variation, we saw that reduced breathlessness was linked with less activation in some brain regions (the anterior insula, ACC, posterior insula and supramarginal gyrus). This is a dampening of activity in brain areas handling expectations of breathlessness, and it could mean that successful treatment works by making patients re-evaluate their priors. Reduced anxiety was linked with greater activation in a slightly different set of brain regions (the posterior cingulate cortex, angular gyrus, primary motor cortex and supramarginal gyrus). As a set, these are involved with how much attention a physical sensation gets, and they may be dampened by anxiety. In other words, if you are anxious, it is difficult to regulate how much attention you give a thing (for example, if you are scared of spiders, you can’t just ignore one when you see one). So when we see an increase in these regions, it may mean that the patients are less anxious and more able to regulate attention. Taken together, this suggests that after rehabilitation our patients processed breathlessness cues more objectively and were less dominated by their priors.

Fig 2. Change in brain activity that fits with rehabilitation-induced changes in response to breathlessness cues (both for anxiety and for breathlessness). Blue colours mean lower brain activity, and red/yellow colours mean higher brain activity. **

Predicting treatment outcome. We also looked at whether brain activation before the treatment could predict who would benefit from it and who would not. Several regions showed higher activation in those patients who went on to improve with treatment. These included the stimulus valuation network as well as the primary motor cortex. Improvements in anxiety ratings were predicted by high activation in the ACC and ventromedial prefrontal cortex, which overlaps with one of our previous studies looking at breathlessness and anxiety in COPD patients versus healthy controls [3]. These findings are also supported by a study showing that higher fear levels before pulmonary rehabilitation tend to mean a greater response to treatment [4].

Fig 3. Brain activity before treatment that is linked with treatment outcome, both in terms of breathlessness (top) and anxiety (bottom). **

To conclude. Pulmonary rehabilitation seems to lead to reduced activity in the brain’s stimulus valuation network and increased activity in attention regulating networks. Those with strong responses in the stimulus valuation network before pulmonary rehabilitation typically see a bigger reduction in their responses to breathlessness cues after treatment. It may be that pulmonary rehabilitation works both by updating breathlessness-related priors and by reducing feelings of depression and anxiety that typically influence sensory processing. *** If this is the case, then we could improve treatment by focusing on re-learning priors, either by using drugs or alternative behavioural therapies. We could also use MRI as a way of developing behavioural tests (questionnaires, computerised tasks) that can be used to figure out who will benefit the most and in which way from the treatment.

[1] Geuter, S. et al. eLife 2017; 6:e24770
[2] Yorke, J. et al. Thorax 2010; 65: 21-26
[3] Herigstad, M. et al. Chest 2015; 148(4): 953-961
[4] Janssens, T. et al. Chest 2011; 140: 618-625

*The FMRI analysis used standard significance thresholds (cluster-forming Z = 2.3, cluster p = 0.05 corrected for multiple comparisons across the whole brain).
**Adapted from Herigstad et al, 2017, biorxiv: The copyright holder for this preprint is the author/funder. It is made available under a CC-BY 4.0 International license
***In addition to potential improvements in fitness. We did see an increase in exercise capacity, even if none of the measured baseline physiological variables were changed, and it is possible that the rehabilitation causes the patients to become healthier and stronger.

The published paper is available here:

Full citation: Mari Herigstad, Olivia K. Faull, Anja Hayen, Eleanor Evans, F. Maxine Hardinge, Katja Wiech, Kyle T. S. Pattinson. Treating breathlessness via the brain: changes in brain activity over a course of pulmonary rehabilitation. European Respiratory Journal, 2017.

MRI and motion correction

Magnetic resonance imaging is sensitive to motion. Just like with other images, movement may cause blurring and distortion (‘artefacts’). To counteract this, motion correction methods are often used. These include devices that track motion as well as software that can correct some of the artefacts after the images have been collected. We have just published a paper on a potential new way to do this, using a wireless accelerometer (link, open access [1]), so here is a quick blog post about motion and MRI, explaining some of our findings along the way.

The GE Firefly scanner, 3T

One of the reasons for doing this work is that we are using a new scanner for newborn babies. Motion is always an issue in MRI, even for adults, but scans of newborns may be particularly vulnerable. It is not always easy to convince a newborn baby to remain still. Newborns may move several centimetres where adults only shift a few millimetres. As the newborn is smaller, this movement also has a greater impact, in that it can completely displace the (much smaller) structure of interest. Newborn babies also differ physiologically from adults in ways that can affect the scan. For example, they breathe faster and less regularly, and the resulting motion is transmitted to the head to a greater degree (due to the smaller distance between head and chest) [2].

Types of motion
Motion comes in many types. There is microscopic motion, related to, for example, the circulation of blood or the diffusion of water, and there is macroscopic motion, related to whole-body movement and physiological functions such as breathing. It may be periodic (e.g. breathing movement), intermittent (e.g. yawns, hiccups) or continuous (e.g. general unsettledness, scanner vibrations). In research settings, noise and motion may also be induced by experimental procedures [3]. Motion causes artefacts such as blurring, signal loss, loss of contrast and even the replication of signal in the wrong places (‘ghosting’) – all of which lower the quality of the image. An example of a motion artefact can be seen in the image below.

Fast Spin Echo image. Left: no motion artefact; Right: artefact due to in-plane rotational head movement. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)

In the figure above, there are lines on the right-hand scan (red arrow), which are distortions. These distortions were created because the head rotated slightly whilst it was being scanned. Too much distortion and the image will become less useful for clinical and experimental purposes.

Types of motion correction
There are many types of MRI motion correction. The simplest is often to prevent and minimise movement through patient coaching, sedation, and fast and/or motion-resistant imaging protocols. A fast scan with a still individual will usually suffer very little motion. However, this is not always possible. Patients do not always lie still, sedation may not always be a good idea, and even our best imaging sequences are vulnerable to movement to some extent. Large movement is therefore often best tackled through different means: it is detected and corrected for. Correction can be done during the scan (real-time) or after the scan (during creation of the image from the raw data and/or post-processing of the image).

There are limits to this type of large movement correction. For example, we can use so-called navigator pulses during the scan to correct for movement in real time, but they tend to make scans take much longer. We can also use tracking devices to correct for motion both during and after a scan, but such devices are limited by the level of motion they can detect and require a fair bit of extra equipment to work inside or interact with the scanner. Finally, we can correct for motion in reconstruction or post-processing, but this too usually takes a lot of time and effort. Which type(s) of correction method is best may differ between different types of scan, patients, and experimental protocols and so on.

In our paper, we used an external motion measuring device – a wireless accelerometer, similar to one that you may buy for fitness purposes – to measure motion of the head. The nice thing about this is that it can give us full real-time 3D motion information about how the head moves. It is not like a visual device, which needs a clear line of sight to be able to ‘observe’ the head at all times. The accelerometer gave us continuous, wireless feedback on the angle of the object being scanned. We could then use this information to adjust the MR data, using a motion correction algorithm. The algorithm, using movement data from the accelerometer, adjusted how the MR signal was recorded at each given time point. We were in short using the signal from the accelerometer to shift k-space.

This meant that shifts in signal due to movement could in theory be recorded and, at least partly, fixed. Conversely, it also meant that motion could be introduced in a motion-free image. To introduce motion, we first made a motion data file with the accelerometer, simply by manually rotating it and recording the angles. We then applied this motion data file to the raw data of a motion-free scan. The motion file was used to shift signal in k-space for each affected phase encode step. Doing this, we could distort the image in the same way that real motion would cause distortions, despite there being no original motion in the MR data. We could ramp this up as we pleased, adding more and more ‘motion’, as shown in the figure below.
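
The corrupt-and-reverse idea can be sketched in a few lines of numpy. This is my own illustration, not the paper’s code: for simplicity it uses a translation (a phase ramp in k-space, via the Fourier shift theorem) rather than the rotations applied in the paper, and a made-up disc phantom.

```python
import numpy as np

# Sketch of introducing "motion" into a motion-free scan by adjusting
# k-space, then reversing it with the known motion record. Translation
# is used here for simplicity; the paper used rotations.

N = 64
y, x = np.mgrid[0:N, 0:N]
image = ((x - N/2)**2 + (y - N/2)**2 < (N/4)**2).astype(float)  # disc phantom

kspace = np.fft.fft2(image)
ky = np.fft.fftfreq(N)                     # phase-encode axis, cycles/pixel
shift = 5.0                                # pretend motion record: 5-pixel shift
ramp = np.exp(-2j * np.pi * ky * shift)    # Fourier shift theorem

late = np.abs(ky) >= 0.25                  # lines "acquired after the movement"
moved = kspace.copy()
moved[late, :] *= ramp[late, None]         # apply the motion to those lines only

ghosted = np.abs(np.fft.ifft2(moved))      # image now shows a motion artefact

restored = moved.copy()
restored[late, :] *= np.conj(ramp)[late, None]   # undo, using the motion record
print(np.allclose(np.abs(np.fft.ifft2(restored)), image, atol=1e-8))  # True
```

Because the corruption was applied as a known multiplication in k-space, multiplying by its complex conjugate removes it exactly; with real, imperfectly measured motion the reversal is only partial, as the post describes.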

MR images incorporating increasing amounts of motion. (a) Original no-motion image, (b-f) motion applied, starting with 2 × 10−2 radians (b) and doubled for each successive image. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)

In principle, reversal of the motion effects should be possible. The motion in the figure above was introduced using a standard rotation matrix which multiplied the k-space locations by the measured angle, and if we reverse this process (i.e. counter-rotate the k-space data according to the measured angles), removing the noise should be possible. As with most things, it is easier to break than fix, yet we did see a subtle reversal of motion artefacts for a simple side-to-side rotation. This means that a wireless accelerometer may eventually be used to retrospectively correct for motion in neonatal MRI scans. It is also possible that it could be used for guiding real-time correction methods.

1. Paley, M., Reynolds, S., Ismail, N., Herigstad, M., Jarvis, D. & Griffiths, P. Wireless Accelerometer for Neonatal MRI Motion Artifact Correction. Technologies. 2017; 5(1): 6. doi:10.3390/technologies5010006
2. Malamateniou, C., Malik, S., Counsell, S., Allsop, J., McGuinness, A., Hayat, T., Broadhouse, K., Nunes, R., Ederies, A., Hajnal, J. & Rutherford, M. Motion-compensation techniques in neonatal and fetal MR imaging. Am J Neuroradiol. 2013; 34(6): 1124-36.
3. Hayen, A., Herigstad, M., Kelly, M., Okell, T., Murphy, K., Wise, R., & Pattinson, K. The effects of altered intrathoracic pressure on resting cerebral blood flow and its response to visual stimulation. NeuroImage. 2012; 66: 479-488. doi: 10.1016/j.neuroimage.2012.10.049.

MRI and k-space

MRI images are created from raw data contained in a raw data space, called k-space. This is a matrix where MR signals are stored throughout the scan. K-space is considered a bit of a tricky topic, so I will only outline a brief explanation of what k-space is and how it relates to the MR image.

This is a visualisation of k-space.

K-space is what? The first thing to recognise is k-space is not a real space. It is a mathematical construct with a fancy name – a matrix used to store data. The data points are not simple numbers, they are spatial frequencies. Unlike ‘normal’ frequencies, which are repetitions per time, spatial frequencies are repetitions per distance. They are, in other words, waves in real space, like sound waves. We usually measure them as cycles (or line pairs) per mm. The number of cycles per mm is called the wavenumber, and the symbol used for wavenumber is ‘k’ – hence k-space.

It may be easiest to envision these spatial frequencies as variations in the brightness of the image (or variations in the spatial distribution of signal, to get more technical). So we have a large matrix of brightness variations, which together make up the full image. The matrix is full when the image is complete (i.e. the scan is done).


K-space relates to the image how? The second thing to recognise is that k-space corresponds (although not directly) to the image. If the image size is 256 by 256, then the k-space will have 256 columns and 256 rows. This doesn’t mean, however, that the bottom right hand k-space spatial frequency holds the information for the bottom right hand image pixel. Each spatial frequency in k-space contains information about the entire final image. In short, the brightness of any given ‘spot’ in k-space indicates the amount that that particular spatial frequency contributes to the image. K-space is typically filled line by line (but we can also fill it in other ways).

Frequencies across k-space. Typically, the edges of k-space hold high spatial frequency information and the centre holds low spatial frequency information. Higher spatial frequencies give us better resolution, and lower spatial frequencies give better contrast information. Think of it like this: when you have abrupt changes in the image, you also get abrupt variations in brightness, which means high spatial frequencies. No abrupt changes means lower spatial frequencies. In effect, the middle of k-space contains one type of information about the image (contrast), and the edges contain another type of information (resolution/detail). If you reconstruct only the middle of k-space, you get all the contrast/signal (but no edges) – like a watercolour – and if you reconstruct only the edges of k-space, you get all the edges (but no contrast) – like a line drawing. Put them together, however, and we get all the information needed to create an image.
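
The watercolour versus line-drawing split can be demonstrated with a small numpy sketch (the disc phantom and the size of the central window are my own, purely illustrative choices):

```python
import numpy as np

# Reconstruct an image from only the centre of k-space (contrast, no
# detail) and from only the edges (detail, no contrast).

N = 128
y, x = np.mgrid[0:N, 0:N]
image = ((x - N/2)**2 + (y - N/2)**2 < (N/3)**2).astype(float)  # disc phantom

kspace = np.fft.fftshift(np.fft.fft2(image))   # centre of array = low frequencies

centre_mask = np.zeros((N, N), dtype=bool)
c = N // 2
centre_mask[c-8:c+8, c-8:c+8] = True           # keep a 16x16 central block only

low_pass  = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * centre_mask)))
high_pass = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * ~centre_mask)))

# low_pass keeps nearly all the signal but blurs the edge ("watercolour");
# high_pass keeps little overall signal but outlines the edge ("line drawing").
print(low_pass.sum() / image.sum())    # roughly 1
print(high_pass.sum() / image.sum())   # much smaller
```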


Transformation: Notes and chords. The route from k-space to image is via a Fourier transform. The idea behind a Fourier transform is that any waveform signal can be split up in a series of components with different frequencies. A common analogy is the splitting up of a musical chord into the frequencies of its notes. Like notes, every value in k-space represents a wave of some frequency, amplitude and phase. All the separate bits of raw signal held in k-space (our notes) together can be transformed into the final MRI image (our chord, or perhaps better: the full tune). One important aspect of the Fourier transform is that it is not dependent on a ‘full’ k-space. We may leave out a few notes and still get the gist of the music, so to speak.
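
The notes-and-chord analogy can be made literal with a one-dimensional Fourier transform (an illustrative sketch with synthetic audio, not MRI data):

```python
import numpy as np

# A "chord" sampled in time is decomposed by the Fourier transform
# into the frequencies of its component notes.

rate = 8192                      # samples per second
t = np.arange(rate) / rate       # one second of signal
notes = [220.0, 277.0, 330.0]    # roughly A3, C#4, E4: an A-major chord
chord = sum(np.sin(2 * np.pi * f * t) for f in notes)

spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), d=1/rate)

# The three strongest components sit exactly at the note frequencies:
peaks = freqs[np.argsort(spectrum)[-3:]]
print(sorted(peaks.tolist()))    # -> [220.0, 277.0, 330.0]
```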

Cutting the k-space. Some types of scan collect only parts of k-space. These are fast imaging sequences, but they have less signal to noise. Less signal to noise means that the amount of signal from the object being measured is reduced compared to the noise that we encounter when we image the object. We always get a certain amount of noise when imaging: from the scanner, the environment and sometimes also the object being imaged. A low signal to noise ratio is typically bad for image quality. Nevertheless, we can get away with collecting fewer lines because k-space itself has a degree of symmetry (for a real-valued object, points on opposite sides of the centre of k-space are complex conjugates of each other). This allows us to mathematically fill in the missing lines in k-space from this assumption of symmetry. Some fast scan sequences sample only every second line in k-space, and we can still reconstruct the image from the information gathered.
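
A sketch of this symmetry trick, assuming a real-valued object (the phantom is made up for illustration): fill the unsampled half of k-space with the complex conjugates of the sampled half, then reconstruct.

```python
import numpy as np

# Half-Fourier ("partial k-space") reconstruction sketch: the k-space of
# a real-valued image is conjugate-symmetric, S(-ky,-kx) = conj(S(ky,kx)),
# so lines we did not sample can be synthesised from lines we did.

N = 64
y, x = np.mgrid[0:N, 0:N]
image = ((x - N/2)**2 + (y - N/2)**2 < (N/4)**2).astype(float)

kspace = np.fft.fft2(image)

# Pretend we only sampled just over half of the phase-encode lines...
sampled = np.zeros_like(kspace)
sampled[:N//2 + 1, :] = kspace[:N//2 + 1, :]

# ...and fill in the rest from conjugate symmetry:
for r in range(N//2 + 1, N):
    for c in range(N):
        sampled[r, c] = np.conj(sampled[(-r) % N, (-c) % N])

recon = np.real(np.fft.ifft2(sampled))
print(np.allclose(recon, image, atol=1e-10))   # True
```

In practice the object is not perfectly real-valued (field inhomogeneities add phase), so real half-Fourier methods also estimate and correct a low-resolution phase map; the sketch above shows only the symmetry principle.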

The problem of motion. If there is any motion during the scan, there will be inconsistencies in k-space. The scanner does not ‘know’ that the object moved, so it tries to reconstruct the image from data that is not consistent. This means that the reconstruction via the Fourier transform will be flawed: signal can be allocated to the wrong place, replicated where it should not be, or missing altogether. We see this as distortions or ‘artefacts’ in the final image. As the data is collected over time, the longer the collection time, the more vulnerable the scan is to motion. Data collected in the phase encoding direction (which takes a relatively long time, see my previous post on gradients) is therefore more vulnerable than data collected in the frequency encoding direction. In a later blog post, I will discuss how we can use motion measurements to adjust the spatial frequencies in k-space in a way that removes the effect of the movement. This is partly possible because we do not need the full k-space to do a Fourier transform and get the image, as described above.

Summary. The take home message on k-space is as follows:

  1. K-space is a matrix for data storage with a fancy name.
  2. It holds the spatial frequencies of the entire scanned object (i.e. the variations in brightness for the object).
  3. We can reconstruct the image from this signal using a Fourier transform.
  4. The reconstruction can still work if we sample only parts of k-space, meaning that we can get images in much shorter time.
  5. Problems during data collection (i.e. motion) may cause errors in k-space that appear as ‘artefacts’ in the image after the Fourier transform.


Caveat. There is more to k-space than what I have described above, as a medical physicist would tell you. While it is good to have an appreciation of what k-space is and is not, a detailed understanding is generally not needed for those of us simply using MRI in our work rather than working on MRI. This post is meant as a brief(ish) introduction, not a detailed explanation, and contains some simplifications.

MRI: location, location, location

Say you want to give an orange an MRI scan. You pop it in the scanner, apply your radiofrequency pulse, and receive the signal. This signal will differ depending on which part of the orange it comes from, for example whether it is the flesh or the peel. But how would you be able to tell where the different types of signal come from? How would you tell that the peel signal is coming from the surface and the flesh signal from inside? In my previous post, I outlined what happens during an MRI scan. Here, I will (briefly) explain how we can tell which signal comes from which place in the object that we are scanning. This has to do with gradients.

Gradients. Gradients are additional magnetic fields that we apply to pinpoint the location that we want to image. These fields vary slightly from the main magnetic field, so the resulting overall field (the main field plus the gradients) now contains variation: specifically, it changes linearly with distance. Protons will experience different fields depending on where they are in the object, and as a result, they have different frequencies. In other words, we get a different signal from the protons depending on where they are in the object that we image.

We have two types of gradients that we employ: frequency-encoding gradients and phase-encoding gradients.

Frequency encoding is quite simple. We take our original field (B0), add a gradient field (Gf), and we get a new field (Bx) for any given point (x) in space. Like this: Bx = B0 + xGf
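
Putting this equation into numbers (the field and gradient strengths below are illustrative values of my choosing, not from the post):

```python
# Frequency encoding sketch: the field, and hence the proton (Larmor)
# frequency, varies linearly with position along the gradient.
# Field and gradient values here are illustrative, not from the post.

GAMMA = 42.58e6   # proton gyromagnetic ratio, Hz per tesla
B0 = 3.0          # main field, tesla
Gf = 40e-3        # gradient strength, tesla per metre (40 mT/m)

def larmor_frequency(x_metres):
    """Frequency at position x along the gradient: f = GAMMA * (B0 + x*Gf)."""
    return GAMMA * (B0 + x_metres * Gf)

centre = larmor_frequency(0.0)
edge = larmor_frequency(0.05)    # a point 5 cm from the centre
print(edge - centre)             # frequency offset in Hz (~85 kHz)
```

With these numbers, two points 5 cm apart differ in frequency by roughly 85 kHz, which is the difference the scanner reads out to tell the positions apart.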

The spin frequency of the protons depends on their position along this new field. The frequency at point x is the sum of the frequency induced by the main field, and that induced by the new gradient. We can now differentiate between signal from different parts of the brain to some extent (see image below).

When we add a frequency encoding gradient (blue) to our main field (green), we can differentiate between signal along this gradient axis (blue arrow). We cannot tell the difference between signal along the other axis (red arrow).

To complete the picture, we need to add phase encoding gradients. We do this by applying another gradient designed to make signal fall out of sync along the phase encoding direction (red arrow axis in the image above). Although the frequency of the signal remains the same, the new gradient will cause it to have a different phase depending on where it comes from (along the red arrow axis). Think of it as a long line of cars waiting at a red light, all in sync. The light turns green, and the first car starts, then the second, then the third, and so on. Now, the cars are out of sync: some are standing still, others are slowly moving, and the first cars are moving at full speed. Also, at the same time, the cars at the back of the queue have become tired of waiting and start reversing (first the one all the way at the back, then the next one, and so on). The full speed profile of the row of cars suddenly has a great deal of variation, with some going forward at different speeds and some going backward at different speeds. In terms of MRI, the shift in phase causes the sum of the frequencies (the sum of the car speeds) along the red arrow axis to vary, and we can use this variation to figure out the individual signals (or individual cars) along the axis.

Through these frequency and phase encoding steps, we can work out both the strength and the location of the signal along both axes.
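To make the 'figuring out the individual cars' idea concrete, here is a toy sketch of phase encoding with three made-up signal sources. Each encoding step shifts the phase by an amount that depends on position, the scanner only ever measures the sum, and an inverse discrete Fourier transform recovers the individual signals.

```python
# Toy phase-encoding demo with three hypothetical signal sources
# along the phase-encode axis. Values are made up for illustration.
import cmath

signals = [1.0, 0.5, 2.0]  # unknown signal strengths at 3 positions
N = len(signals)

# One measurement per phase-encoding step: the scanner sees only the
# sum of all sources, each phase-shifted according to its position y.
measurements = [
    sum(s * cmath.exp(-2j * cmath.pi * n * y / N)
        for y, s in enumerate(signals))
    for n in range(N)
]

# The inverse transform untangles the sum back into per-position signals.
recovered = [
    sum(m * cmath.exp(2j * cmath.pi * n * y / N)
        for n, m in enumerate(measurements)) / N
    for y in range(N)
]
print([round(abs(r), 6) for r in recovered])  # → [1.0, 0.5, 2.0]
```

This is only a sketch of the principle; a real scanner varies the phase-encoding gradient strength between repetitions rather than computing complex sums, but the mathematics of recovering positions from phase shifts is the same.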

Frequency vs Phase Encoding. An important difference between frequency encoding and phase encoding is speed. Frequency encoding can be done with a single radiofrequency pulse (see my previous blog post on what happens during an MRI). We apply a gradient and a pulse, and all the frequencies can be read out at once. It is fast. For phase encoding, we need a separate radiofrequency pulse for each phase-encoding step. We apply our pulse and then the gradient, let the phase change happen, measure it, and then repeat the whole cycle with the next gradient. For an image with 256 lines along the phase-encoding axis, we have to do this 256 times, each repetition taking one TR (repetition time). It is slow. A consequence of this is that phase encoding is more vulnerable to movement than frequency encoding.
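The speed difference can be put in numbers. Assuming one TR per phase-encoding line (a simplification of real pulse sequences), the total acquisition time is simply:

```python
# Back-of-the-envelope scan time, assuming one repetition (TR) per
# phase-encoding step. The TR value below is an illustrative assumption.
def scan_time_seconds(n_phase_steps, tr_seconds):
    """Total acquisition time: one repetition per phase-encoding line."""
    return n_phase_steps * tr_seconds

# 256 phase-encoding lines at TR = 0.5 s is about two minutes:
print(scan_time_seconds(256, 0.5))  # → 128.0 seconds
```

Two minutes is a long time to lie perfectly still, which is why movement shows up as artifacts along the phase-encoding direction.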

In my next blog post, I will describe how we store the MRI signal during the scan and how the signal can be transformed into an image.

What happens during an MRI?

MRI is complex, but the basic events in the scanner are quite straightforward. Below is a short, simple guide to what happens during an MRI scan, without too much physics to complicate matters. It explains the actual events in the scanner and gives a simplified overview of the parameters we can use to change the images we collect.


  1. First we align the protons in the object we want to scan with the magnetic field of the scanner (B0). This happens naturally when the object is placed into the scanner, as the slightly magnetic protons adjust themselves to match the magnetic field of the scanner.
  2. Second, we use a radio frequency (RF) pulse to ‘tip’ these aligned protons out of their alignment with the magnetic field. This pulse is sometimes called B1 and is applied at a 90° angle to B0. This causes the protons to rotate, and we can measure this rotation with RF measurement coils. RF coils are essentially loops of wire, and the changing magnetic flux of the rotating protons induces an electric current in these loops. This works because electrical currents generate magnetic fields, and changing magnetic fields in turn induce electrical currents. RF coils are often designed to both deliver the RF pulse (through an applied electrical current) and receive the signal (through the resulting change in magnetic flux). During the rotation, two things happen.
    1. Protons begin to align themselves again with B0. The speed of this realignment is called the T1 relaxation time. For any given type of object (or tissue, if we are doing medical imaging), the composition of the object will cause its protons to realign at different rates. Faster realignment means brighter signal.
    2. Protons become out of phase with each other. This reduces the signal we can measure with our coil (as the rotating protons are no longer ‘pulling in the same direction’). The speed at which this happens is called the T2 relaxation time. Protons remaining in phase for longer means brighter signal.
  3. The RF pulse is applied again, to repeat the procedure. The average of all these repeats gives us a clear MR image.

The contrasts of the scan (T1, T2) are determined by two parameters: repetition time (TR) and echo time (TE).

TR is the time between the RF pulses. If we have a long TR, all protons in the object have time to realign with B0. If we have a short TR, some protons may not have fully realigned by the time the next RF pulse arrives. In terms of medical imaging, some tissues will need longer to have all their protons realign than other tissues. If the ‘slow’ tissues have not realigned within the TR, the signal from these tissues will be weaker than that from the ‘fast’ tissues. This way, we can tell the difference between different tissues.

TE is the time from the RF pulse to when we measure the signal induced by the rotating protons. Some types of tissue have protons that fall out of phase (‘dephase’) faster than others. For example, protons in fluids encounter fewer obstacles and will remain in phase for quite a long time. Protons that are constrained by structures may not remain in phase as long. A longer TE means that the protons have more time to dephase, and this will reduce the signal from tissues that dephase quickly more than from tissues that dephase slowly.
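The TR and TE effects described above can be combined in a simplified spin-echo signal model: signal is proportional to proton density, times the T1 recovery within the TR, times the T2 decay over the TE. The tissue values below are rough textbook-style numbers, chosen only for illustration.

```python
# Simplified spin-echo signal model tying TR/T1 and TE/T2 together.
# Tissue values are rough illustrative assumptions, not measurements.
import math

def signal(pd, t1, t2, tr, te):
    """Relative signal: proton density x T1 recovery x T2 decay (times in ms)."""
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

# White matter recovers faster (shorter T1) than fluid, so with a short TR
# it gives more signal -- the T1 differentiation described above.
wm = signal(pd=0.7, t1=800, t2=80, tr=500, te=15)
csf = signal(pd=1.0, t1=4000, t2=2000, tr=500, te=15)
print(wm > csf)  # → True
```

The same function shows the opposite ordering if you lengthen the TE, since fluids dephase much more slowly than structured tissue.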

Generally speaking, we have three types of contrast: T1-weighted, T2-weighted and proton-density (PD) weighted.

A scan sequence with a short TR and short TE is usually called T1-weighted. By ‘a short TE’ we usually mean that the TE is shorter than the T2. In other words, there is not enough time for the protons to dephase properly, and the T2 effects are masked. The short TR, on the other hand, means we can easily differentiate between tissues with longer and shorter T1 times. The scan is therefore T1-weighted. Tissues that are bright in T1-weighted scans include fat and white brain matter. Muscle and grey brain matter are less bright (grey in colour), and fluids tend to be black.

A long TR and long TE scan sequence is usually called T2-weighted. Longer TE means that we get differentiation based on protons dephasing at different rates, and the T2 effects are visible. The longer TR, however, means all tissues have time to have all their protons realigned to B0, so we get no differentiation based on T1 times. The scan is therefore T2-weighted. Tissues that are bright in T2-weighted scans are fat and fluids. Muscle and grey brain matter are grey, and white brain matter is almost black.

A long TR and a short TE means we get neither T1 differentiation nor T2 differentiation, and we call this proton-density weighted. With both relaxation effects minimised, the remaining contrast reflects the actual density of protons in the tissues. A short TR and a long TE would give us both T1 and T2 effects at once, but they tend to cancel each other out (tissues with long T1 usually also have long T2, so one effect darkens them while the other brightens them). We don’t use this type of scan, as it doesn’t yield any useful information.
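The three contrast regimes can be sketched with a simplified signal model (proton density times T1 recovery times T2 decay). The tissue values below are rough illustrative assumptions, not measurements.

```python
# The three contrast regimes, sketched with a simplified signal model.
# Tissue parameters are rough illustrative assumptions (times in ms).
import math

def signal(pd, t1, t2, tr, te):
    return pd * (1 - math.exp(-tr / t1)) * math.exp(-te / t2)

WM = dict(pd=0.7, t1=800, t2=80)      # white matter (rough values)
CSF = dict(pd=1.0, t1=4000, t2=2000)  # cerebrospinal fluid (rough values)

# T1-weighted (short TR, short TE): white matter bright, fluid dark.
print(signal(**WM, tr=500, te=15) > signal(**CSF, tr=500, te=15))    # → True
# T2-weighted (long TR, long TE): fluid bright, white matter dark.
print(signal(**WM, tr=4000, te=100) < signal(**CSF, tr=4000, te=100))  # → True
# PD-weighted (long TR, short TE): contrast tracks proton density.
print(signal(**WM, tr=4000, te=15) < signal(**CSF, tr=4000, te=15))  # → True
```

In the PD-weighted case, both relaxation terms are close to 1 for both tissues, so the signal ratio is close to the ratio of proton densities, which is the point of that weighting.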


In this post, I have summarised some of the basics about MR imaging. In my next post, I will move on to outline some of the basics about raw MR data processing, covering k-space and Fourier transforms.

Reblog: The Firefly Scanner is Featured on the BBC

Below is a blog post I wrote for our group after the Firefly neonatal scanner was highlighted on the BBC. It’s a great little scanner and I’m thrilled to be involved with this research. (Link to the post on our group’s blog.)

Building on previous work, Professor Martyn Paley has developed the concept of a bespoke MRI scanner for newborn babies (neonates) along with Professor Paul Griffiths. The result is a unique, full-strength neonate scanner, built by GE Healthcare and installed in the Neonatal Intensive Care Unit in the Jessop Wing. Named ‘Firefly’, the scanner is one of only two such prototype scanners in the world, and uniquely marries diagnostic imaging with easy access to our neonatal unit.

Featured last week on the BBC, the Firefly scanner has gained deserved attention. The BBC’s video of the scanner in action shows how important it is for the healthcare of newborn babies to have powerful scanning facilities within quick and easy reach. Tiny compared to adult scanners (which can easily weigh several tons), the Firefly would be able to fit in many small Neonatal Intensive Care Units. This is a major advantage over the more commonly used ultrasound imaging in providing ready access to high quality brain imaging.

Babies can be difficult to image as they rarely stay still. Our group have also recently published one of the first research papers with data collected using the Firefly scanner, in which we discuss a potential new way of correcting motion during MRI in babies. The paper is titled “Wireless Accelerometer for Neonatal MRI Motion Artifact Correction” and is freely available.

We are delighted to have the Firefly scanner in Sheffield. It is an important clinical development and opens up exciting new possibilities for linking research on reproduction and development with the health of newborn babies.

The Firefly 3-Tesla neonate MRI scanner in the Jessop Wing. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)