MRI and motion correction

Magnetic resonance imaging is sensitive to motion. As with other types of imaging, movement can cause blurring and distortion (‘artefacts’). To counteract this, motion correction methods are often used. These include devices that track motion as well as software that can correct some of the artefacts after the images have been collected. We have just published a paper on a potential new way to do this, using a wireless accelerometer (open access [1]), so here is a quick blog post about motion and MRI, explaining some of our findings along the way.

The GE Firefly scanner, 3T

One of the reasons for doing this work is that we are using a new scanner for newborn babies. Motion is always an issue in MRI, even for adults, but the scanning of newborns may be particularly vulnerable. It is not always easy to convince a newborn baby to remain still. Newborns may move several centimetres where adults only shift a few millimetres. Because the newborn is smaller, this movement also has a greater impact: it can completely displace the (much smaller) structure of interest. Newborn babies also differ physiologically from adults in ways that can affect the scan. For example, they breathe faster and less regularly, and the resulting motion is transmitted to the head to a greater degree (due to the smaller distance between head and chest) [2].

Types of motion
Motion comes in many types. There is microscopic motion, related to, for example, the circulation of blood or the diffusion of water, and there is macroscopic motion, related to whole-body movement and physiological functions such as breathing. It may be periodic (e.g. breathing), intermittent (e.g. yawns, hiccups) or continuous (e.g. general unsettledness, scanner vibrations). In research settings, noise and motion may also be induced by experimental procedures [3]. Motion causes artefacts such as blurring, signal loss, loss of contrast and even the replication of signal in the wrong places (‘ghosting’) – all lowering the quality of the image. An example of a motion artefact can be seen in the image below.

Fast Spin Echo image. Left: no motion artefact; Right: artefact due to in-plane rotational head movement. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)

In the figure above, there are lines on the right-hand scan (red arrow), which are distortions. These distortions were created because the head rotated slightly whilst it was being scanned. Too much distortion and the image will become less useful for clinical and experimental purposes.

Types of motion correction
There are many types of MRI motion correction. The simplest is often to prevent or minimise movement in the first place: coaching the patient, sedation, and fast and/or motion-resistant imaging protocols. A fast scan of a still individual will usually show very little motion artefact. However, this is not always possible. Patients do not always lie still, sedation may not always be a good idea, and even our best imaging sequences can be vulnerable to movement to some extent. Large movement is therefore often best tackled through different means: it is detected and corrected for. Correction can be done during the scan (real-time) or after the scan (creation of the image from the raw data and/or post-processing of the image).

There are limits to this type of large movement correction. For example, we can use so-called navigator pulses during the scan to correct for movement in real time, but they tend to make scans take much longer. We can also use tracking devices to correct for motion both during and after a scan, but such devices are limited by the level of motion they can detect and require a fair bit of extra equipment to work inside or interact with the scanner. Finally, we can correct for motion in reconstruction or post-processing, but this too usually takes a lot of time and effort. Which type(s) of correction method is best may differ between different types of scan, patients, and experimental protocols and so on.

In our paper, we used an external motion measuring device – a wireless accelerometer, similar to one that you may buy for fitness purposes – to measure motion of the head. The nice thing about this is that it can give us full real-time 3D information about how the head moves. It is not like a visual device, which needs a clear line of sight to be able to ‘observe’ the head at all times. The accelerometer gave us continuous, wireless feedback on the angle of the object being scanned. We could then use this information to adjust the MR data, using a motion correction algorithm. The algorithm, using movement data from the accelerometer, adjusted how the MR signal was recorded at each given time point. We were, in short, using the signal from the accelerometer to shift k-space.

This meant that shifts in signal due to movement could in theory be recorded and, at least partly, fixed. Conversely, it also meant that motion could be introduced in a motion-free image. To introduce motion, we first made a motion data file with the accelerometer, simply by manually rotating it and recording the angles. We then applied this motion data file to the raw data of a motion-free scan. The motion file was used to shift signal in k-space for each affected phase encode step. Doing this, we could distort the image in the same way that real motion would cause distortions, despite there being no original motion in the MR data. We could ramp this up as we pleased, adding more and more ‘motion’, as shown in the figure below.
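The k-space manipulation described above can be sketched in a few lines of Python. This is a simplified toy version, not the paper’s actual implementation; the function and variable names are my own, and I assume a simple in-plane rotation applied per phase-encode line:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def apply_motion(image, angles):
    """Corrupt an image by rotating the k-space sampling locations of
    each phase-encode line by that line's recorded angle (a toy model
    of in-plane rotational motion during the scan)."""
    N = image.shape[0]
    k_true = np.fft.fftshift(np.fft.fft2(image))
    c = N // 2                            # DC sits here after fftshift
    kx = np.arange(N, dtype=float) - c
    out = np.zeros_like(k_true)
    for row, th in enumerate(angles):     # one angle per phase-encode step
        ky = float(row - c)
        # rotate this line's sampling coordinates by the angle
        xr = np.cos(th) * kx - np.sin(th) * ky + c
        yr = np.sin(th) * kx + np.cos(th) * ky + c
        # resample the 'true' k-space at the rotated locations
        re = map_coordinates(k_true.real, [yr, xr], order=1)
        im = map_coordinates(k_true.imag, [yr, xr], order=1)
        out[row] = re + 1j * im
    return np.abs(np.fft.ifft2(np.fft.ifftshift(out)))
```

Running the same resampling with the negated angles approximately undoes the corruption for a simple constant rotation, which is the idea behind using the recorded angles for retrospective correction.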

MR images incorporating increasing amounts of motion. (a) Original no-motion image, (b-f) motion applied, starting with 2 × 10−2 radians (b) and doubled for each successive image. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)

In principle, reversal of the motion effects should be possible. The motion in the figure above was introduced using a standard rotation matrix, which rotated the k-space locations by the measured angle; if we reverse this process (i.e. counter-rotate the k-space data according to the measured angles), removing the artefact should be possible. As with most things, it is easier to break than to fix, yet we did see a subtle reversal of motion artefacts for a simple side-to-side rotation. This means that a wireless accelerometer may eventually be used to retrospectively correct for motion in neonatal MRI scans. It is also possible that it could be used for guiding real-time correction methods.

References: 
1. Paley, M., Reynolds, S., Ismail, N., Herigstad, M., Jarvis, D. & Griffiths, P. Wireless Accelerometer for Neonatal MRI Motion Artifact Correction. Technologies. 2017; 5(1): 6. doi:10.3390/technologies5010006
2. Malamateniou, C., Malik, S., Counsell, S., Allsop, J., McGuinness, A., Hayat, T., Broadhouse, K., Nunes, R., Ederies, A., Hajnal, J. & Rutherford, M. Motion-compensation techniques in neonatal and fetal MR imaging. Am J Neuroradiol. 2013; 34(6): 1124-36.
3. Hayen, A., Herigstad, M., Kelly, M., Okell, T., Murphy, K., Wise, R., & Pattinson, K. The effects of altered intrathoracic pressure on resting cerebral blood flow and its response to visual stimulation. NeuroImage. 2012; 66: 479-488. doi: 10.1016/j.neuroimage.2012.10.049.

Reblog: “I fell into this by accident”

I planned my career, but not all went exactly as I envisioned. Opportunities emerged, unexpected findings were too interesting not to pursue, new studies arose from old ones, new collaborators came up with exciting suggestions, and pints with colleagues often turned into interesting, half-crazy ideas that somehow worked. As a result, I have worked across several fields (at the peril of not being able to self-cite very often), and gained a great deal of unexpected experience. It has worked out pretty well, if I do say so myself. Sheffield University’s Think Ahead research support group recently published a short blog post on Krumboltz, Levin and Mitchell’s ‘Planned Happenstance’ theory, which resonated with me. This focuses on making the best of everyday opportunities to succeed and contains some valuable suggestions on how to do this. The Think Ahead post is a neat overview of the theory and worth a read.

Career planning is often presented as applying a logical, step by step approach following a linear pattern to develop your career but the reality is that life in general and our career thinking more specifically often don’t work that way. We have a huge number of options open to us and deciding what is the […]

Read the rest of “I fell into this by accident” at the Think Ahead Blog

Reblog: How do we know what we know?

“All hypotheses emerge from assumptions, whether we recognize them or not.”

Ambika Kamath, a graduate student at Harvard University working on lizards and how their habitat, behaviour, and morphology influence each other, has written an excellent blog post in the wake of a human dimorphism debate. The post is not about human dimorphism itself, but instead highlights how our assumptions can shape experimental design and therefore results. It can be easy to accept oft-cited facts without critical thought, particularly if they are in line with personal opinion.

Ambika Kamath’s post is reblogged below, and I encourage you to read it.

Ambika Kamath

Over the last few months, there’s been a slow-boiling battle underway between Holly Dunsworth and Jerry Coyne about the evolution of sexual dimorphism in humans, surrounding the question of why male and female humans, on average, differ in size. The battlefield ranged from blogposts to twitter to magazine articles. In a nutshell, Coyne argued that “sexual dimorphism for body size (difference between men and women) in humans is most likely explained by sexual selection” because “males compete for females, and greater size and strength give males an advantage.” His whole argument was motivated by this notion that certain Leftists ignore facts about the biology of sex differences because of their ideological fears, and are therefore being unscientific.

Dunsworth’s response to Coyne’s position was that “it’s not that Jerry Coyne’s facts aren’t necessarily facts, or whatever. It’s that this point of view is too simple and is obviously biased toward some stories, ignoring others. And…


MRI and k-space

MRI images are created from raw data contained in a raw data space, called k-space. This is a matrix where MR signals are stored throughout the scan. K-space is considered a bit of a tricky topic, so I will only outline a brief explanation of what k-space is and how it relates to the MR image.

This is a visualisation of k-space.

K-space is what? The first thing to recognise is that k-space is not a real space. It is a mathematical construct with a fancy name – a matrix used to store data. The data points are not simple numbers, they are spatial frequencies. Unlike ‘normal’ frequencies, which are repetitions per unit time, spatial frequencies are repetitions per unit distance. They are, in other words, waves in real space, like sound waves. We usually measure them as cycles (or line pairs) per mm. The number of cycles per mm is called the wavenumber, and the symbol used for wavenumber is ‘k’ – hence k-space.
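As a concrete toy example (all numbers made up for illustration): a brightness pattern that repeats 4 times across a 64 mm field of view has a wavenumber of 4/64 ≈ 0.06 cycles per mm, and a Fourier transform picks this out directly:

```python
import numpy as np

# A 1D 'image': brightness repeating 4 times over a 64 mm field of view
N, fov_mm = 64, 64.0
x = np.arange(N) * (fov_mm / N)                 # pixel positions, mm
pattern = np.cos(2 * np.pi * (4 / fov_mm) * x)  # 4 cycles per 64 mm
spectrum = np.abs(np.fft.fft(pattern))
freqs = np.fft.fftfreq(N, d=fov_mm / N)         # frequencies in cycles per mm
peak = abs(freqs[np.argmax(spectrum)])
# peak == 4/64 == 0.0625 cycles per mm: the pattern's wavenumber, k
```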

It may be easiest to envision these spatial frequencies as variations in the brightness of the image (or variations in the spatial distribution of signal, if we are to get more technical). So we have a large matrix of brightness variations, which together make up the complete image. The matrix is full when the image is completed (i.e. the scan is done).


K-space relates to the image how? The second thing to recognise is that k-space corresponds (although not directly) to the image. If the image size is 256 by 256, then k-space will have 256 columns and 256 rows. This doesn’t mean, however, that the bottom right-hand k-space spatial frequency holds the information for the bottom right-hand image pixel. Each spatial frequency in k-space contains information about the entire final image. In short, the brightness of any given ‘spot’ in k-space indicates how much that particular spatial frequency contributes to the image. K-space is typically filled line by line (but we can also fill it in other ways).
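A quick numerical illustration of that point (a toy example, not scanner data): put signal at a single k-space location and reconstruct. The result is a wave whose magnitude is spread evenly over every pixel of the image, showing that one k-space point carries information about the whole image, not one pixel:

```python
import numpy as np

# One non-zero point in k-space...
N = 64
k = np.zeros((N, N), dtype=complex)
k[3, 5] = 1.0                      # a single spatial frequency

# ...reconstructs to a wave that spans the ENTIRE image
wave = np.fft.ifft2(k)
# |wave| is identical (and non-zero) at every pixel: this one k-space
# point contributes to the whole image, not to one location
```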

Frequencies across k-space. Typically, the edges of k-space hold high spatial frequency information compared to the centre. Higher spatial frequencies give us better resolution; lower spatial frequencies give better contrast information. Think of it like this: where you have abrupt changes in the image, you also get abrupt variations in brightness, which means high spatial frequencies. No abrupt changes means lower spatial frequencies. In effect, the middle of k-space contains one type of information about the image (contrast), and the edges contain another (resolution/detail). If you reconstruct only the middle of k-space, you get all the contrast/signal but no edges – like a watercolour – and if you reconstruct only the edges of k-space, you get all the edges but no contrast – like a line drawing. Put them together, however, and we get all the information needed to create an image.
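The watercolour/line-drawing split is easy to try numerically. A minimal sketch (the function name and the choice of a circular centre region are my own, for illustration only):

```python
import numpy as np

def split_kspace(image, radius):
    """Reconstruct separately from the centre of k-space (contrast,
    the 'watercolour') and from its edges (detail, the 'line drawing')."""
    N = image.shape[0]
    k = np.fft.fftshift(np.fft.fft2(image))
    yy, xx = np.mgrid[0:N, 0:N]
    c = N // 2                                   # DC sits here after fftshift
    centre = (yy - c) ** 2 + (xx - c) ** 2 <= radius ** 2
    recon = lambda kk: np.fft.ifft2(np.fft.ifftshift(kk)).real
    return recon(k * centre), recon(k * ~centre)

img = np.zeros((64, 64))
img[16:48, 16:48] = 1.0                          # a bright square
smooth, detail = split_kspace(img, 6)
# `smooth` is blurred but keeps the overall contrast; `detail` keeps
# only the outlines; added together they give back the original exactly
```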


Transformation: Notes and chords. The route from k-space to image is via a Fourier transform. The idea behind a Fourier transform is that any waveform signal can be split up in a series of components with different frequencies. A common analogy is the splitting up of a musical chord into the frequencies of its notes. Like notes, every value in k-space represents a wave of some frequency, amplitude and phase. All the separate bits of raw signal held in k-space (our notes) together can be transformed into the final MRI image (our chord, or perhaps better: the full tune). One important aspect of the Fourier transform is that it is not dependent on a ‘full’ k-space. We may leave out a few notes and still get the gist of the music, so to speak.

Cutting the k-space. Some types of scan collect only part of k-space. These are fast imaging sequences, but they have a lower signal-to-noise ratio: the amount of signal from the object being measured is reduced compared to the noise that we encounter when we image the object. We always pick up a certain amount of noise when imaging: from the scanner, the environment and sometimes also the object being imaged. A low signal-to-noise ratio is typically bad for image quality. Nevertheless, we can get away with collecting fewer lines in k-space because k-space itself has a built-in symmetry: for a real-valued object, opposite halves of k-space are mirrored complex conjugates of each other. This allows us to mathematically fill in the missing lines from the lines we did collect. Some fast scan sequences sample only a little over half of the lines in k-space, and we can still reconstruct the image from the information gathered.
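This conjugate-symmetry trick can be shown in a few lines. A toy sketch of half-Fourier reconstruction (my own simplified version, ignoring the phase errors that complicate this on a real scanner):

```python
import numpy as np

def half_fourier_recon(image):
    """'Acquire' only the top half of k-space (plus one line) and fill
    the rest via conjugate symmetry -- exact for a real-valued object."""
    N = image.shape[0]
    k = np.fft.fft2(image)
    half = np.zeros_like(k)
    half[: N // 2 + 1] = k[: N // 2 + 1]        # the lines we collected
    for ky in range(N // 2 + 1, N):             # fill the missing lines:
        for kx in range(N):                     # k(-ky,-kx) = conj(k(ky,kx))
            half[ky, kx] = np.conj(half[(-ky) % N, (-kx) % N])
    return np.fft.ifft2(half).real
```

Even though nearly half of the lines were never ‘measured’, the reconstruction of a real-valued test object comes out exact.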

The problem of motion. If there was any motion during the scan, there will be inconsistencies in k-space. The scanner does not ‘know’ that the object moved, so it tries to reconstruct the image from data that is not consistent. This means that the reconstruction via the Fourier transform will be flawed: signal can be allocated to the wrong place, replicated where it should not be, or missing altogether. We see this as distortions or ‘artefacts’ in the final image. As the data is collected over time, the longer the collection time, the more vulnerable the scan is to motion. Data collected in the phase encoding direction (which takes a relatively long time, see my previous post on gradients) is therefore more vulnerable than data collected in the frequency encoding direction. In a later blog post, I will discuss how we can use motion measurements to adjust the spatial frequencies in k-space in a way that removes the effect of the movement. This is partly possible because we do not need the full k-space to do a Fourier transform and get the image, as described above.
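A toy demonstration of such an inconsistency (all values invented for illustration): suppose the object shifts by 3 pixels halfway through the ‘scan’. The lines collected after the move belong to a shifted object, whose k-space differs from the original by a linear phase ramp (the Fourier shift theorem), and the mismatched halves produce artefacts on reconstruction:

```python
import numpy as np

# The object: a bright square; it shifts 3 pixels mid-scan
N = 64
img = np.zeros((N, N)); img[20:44, 20:44] = 1.0
k = np.fft.fft2(img)

# Fourier shift theorem: a spatial shift is a phase ramp in k-space
ky = np.fft.fftfreq(N)                          # cycles per pixel
k_moved = k * np.exp(-2j * np.pi * ky * 3)[:, None]

# Half the phase-encode lines come from before the move, half after
k_acq = k.copy()
k_acq[N // 2:] = k_moved[N // 2:]
artefact = np.abs(np.fft.ifft2(k_acq))
# the inconsistent halves no longer agree: ghosting/ripples appear
```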

Summary. The take home message on k-space is as follows:

  1. K-space is a matrix for data storage with a fancy name.
  2. It holds the spatial frequencies of the entire scanned object (i.e. the variations in brightness for the object).
  3. We can reconstruct the image from this signal using a Fourier transform.
  4. The reconstruction can still work if we sample only parts of k-space, meaning that we can get images in much shorter time.
  5. Problems during data collection (e.g. motion) may cause errors in k-space that appear as ‘artefacts’ in the image after the Fourier transform.

 

Caveat. There is more to k-space than what I have described above, as a medical physicist would tell you. While it is good to have an appreciation of what k-space is and is not, a detailed understanding is generally not needed for those of us simply using MRI in our work rather than working on MRI. This post is meant as a brief(ish) introduction, not a detailed explanation, and contains some simplifications.

MRI: location, location, location

Say you want to give an orange an MRI scan. You pop it in the scanner, apply your radiofrequency pulse, and receive the signal. This signal will differ depending on which part of the orange it comes from, for example whether it is the flesh or the peel. But how would you be able to tell where the different types of signal come from? How would you tell that the peel signal is coming from the surface and the flesh signal from inside? In my previous post, I outlined what happens during an MRI scan. Here, I will (briefly) explain how we can tell which signal comes from which place in the object that we are scanning. This has to do with gradients.

Gradients. Gradients are additional magnetic fields that we apply to encode the location that signal comes from. They add a small variation to the main magnetic field, so that the resulting overall field (the main field plus the gradient) changes linearly with distance. Protons will experience different field strengths depending on where they are in the object and, as a result, they have different frequencies. In other words, we get a different signal from the protons depending on where they are in the object that we image.

We have two types of gradients that we employ: frequency-encoding gradients and phase-encoding gradients.

Frequency encoding is quite simple. We take our original field (B0), add a gradient field (Gf), and we get a new field (Bx) for any given point (x) in space. Like this: Bx = B0 + xGf

The spin frequency of the protons depends on their position along this new field. The frequency at point x is the sum of the frequency induced by the main field, and that induced by the new gradient. We can now differentiate between signal from different parts of the brain to some extent (see image below).

When we add a frequency encoding gradient (blue) to our main field (green), we can differentiate between signal along this gradient axis (blue arrow). We cannot tell the difference between signal along the other axis (red arrow).

To complete the picture, we need to add phase encoding gradients. We do this by applying another gradient designed to make signal fall out of sync along the phase encoding direction (red arrow axis in the image above). Although the frequency of the signal remains the same, the new gradient will cause it to have a different phase depending on where it comes from (along the red arrow axis). Think of it as a long line of cars waiting at a red light, all in sync. The light turns green, and the first car starts, then the second, then the third, and so on. Now, the cars are out of sync: some are standing still, others are slowly moving, and the first cars are moving at full speed. Also, at the same time, the cars at the back of the queue have become tired of waiting and start reversing (first the one all the way at the back, then the next one, and so on). The full speed profile of the row of cars suddenly has a great deal of variation, with some going forward at different speeds and some going backward at different speeds. In terms of MRI, the shift in phase causes the sum of the frequencies (the sum of the car speeds) along the red arrow axis to vary, and we can use this variation to figure out the individual signals (or individual cars) along the axis.

Through these frequency and phase encoding steps, we can figure out the strength of the signal and location of signal along both axes.

Frequency vs Phase Encoding. An important difference between frequency encoding and phase encoding is speed. Frequency encoding can be done with a single radiofrequency pulse (see my previous blog post on what happens during an MRI). We apply a gradient and a pulse, and all the frequencies can be read out at once. It is fast. For phase encoding, we need a new radiofrequency pulse for each phase-encoding step. We apply our pulse and then the gradient, let the phase change happen, measure it, and then repeat with another pulse and a different gradient strength. For an image with 256 lines, we have to do this 256 times, each repetition taking one TR (repetition time). It is slow. A consequence of this is that phase encoding is more vulnerable to movement than frequency encoding.

In my next blog post, I will describe how we store the MRI signal during the scan and how the signal can be transformed into an image.