MRI and k-space

MRI images are created from raw data stored in a data matrix called k-space, where the MR signals are collected throughout the scan. K-space is considered a bit of a tricky topic, so I will only outline a brief explanation of what k-space is and how it relates to the MR image.

[Image: a visualisation of k-space.]

K-space is what? The first thing to recognise is that k-space is not a real space. It is a mathematical construct with a fancy name – a matrix used to store data. The data points are not simple numbers; they represent spatial frequencies. Unlike ‘normal’ frequencies, which are repetitions per unit time, spatial frequencies are repetitions per unit distance. They are, in other words, waves in real space, like sound waves. We usually measure them in cycles (or line pairs) per mm. The number of cycles per mm is called the wavenumber, and the symbol used for the wavenumber is ‘k’ – hence k-space.

It may be easiest to envision these spatial frequencies as variations in the brightness of the image (or variations in the spatial distribution of the signal, if we are to get more technical). So we have a large matrix of brightness variations, which together make up the full image. The matrix is full when the image is completed (i.e. the scan is done).


K-space relates to the image how? The second thing to recognise is that k-space corresponds (although not directly) to the image. If the image size is 256 by 256 pixels, then k-space will have 256 columns and 256 rows. This doesn’t mean, however, that the bottom right-hand point in k-space holds the information for the bottom right-hand image pixel. Each spatial frequency in k-space contains information about the entire final image. In short, the brightness of any given ‘spot’ in k-space indicates how much that particular spatial frequency contributes to the image. K-space is typically filled line by line (although we can also fill it in other ways).
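To make this correspondence concrete, here is a minimal numpy sketch of my own (the random array is just a stand-in for a real image). It shows that k-space has the same dimensions as the image, and that the two are linked by a Fourier transform:

```python
import numpy as np

# A 256 x 256 stand-in "image" (random values instead of real anatomy).
image = np.random.rand(256, 256)

# Image -> k-space: a 2D Fourier transform.
# fftshift moves the low spatial frequencies to the middle of the matrix,
# which is how k-space is usually displayed.
kspace = np.fft.fftshift(np.fft.fft2(image))
print(kspace.shape)                     # (256, 256): same size as the image

# k-space -> image: the inverse 2D Fourier transform recovers the image.
recon = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
print(np.allclose(recon, image))        # True: no information was lost
```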

Frequencies across k-space. Typically, the edges of k-space hold high spatial frequency information, and the centre holds low spatial frequency information. Higher spatial frequencies give us better resolution, and lower spatial frequencies give us better contrast information. Think of it like this: when you have abrupt changes in the image, you also get abrupt variations in brightness, which means high spatial frequencies. No abrupt changes means lower spatial frequencies. This in effect means that the middle of k-space contains one type of information about the image (contrast), and the edges contain another type of information (resolution/detail). If you reconstruct only the middle of k-space, you get all the contrast/signal but no edges – like a watercolour – and if you reconstruct only the edges of k-space, you get all the edges but no contrast – like a line drawing. Put them together, however, and we get all the information needed to create the image.
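As a rough illustration of the watercolour/line-drawing split, here is a small numpy sketch (again using a stand-in array; with a real image in its place, low_pass comes out blurred and high_pass comes out as an edge map):

```python
import numpy as np

image = np.random.rand(256, 256)                 # stand-in for a real MR image
kspace = np.fft.fftshift(np.fft.fft2(image))     # centre of the matrix = low spatial frequencies

# Keep only the central 32 x 32 block of k-space (the low spatial frequencies).
centre = np.zeros((256, 256), dtype=bool)
centre[112:144, 112:144] = True

# "Watercolour": overall contrast preserved, sharp edges lost.
low_pass = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * centre)))

# "Line drawing": edges preserved, overall contrast lost.
high_pass = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace * ~centre)))
```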


Transformation: Notes and chords. The route from k-space to image is via a Fourier transform. The idea behind a Fourier transform is that any waveform signal can be split up into a series of components with different frequencies. A common analogy is the splitting up of a musical chord into the frequencies of its notes. Like notes, every value in k-space represents a wave of some frequency, amplitude and phase. All the separate bits of raw signal held in k-space (our notes) can together be transformed into the final MRI image (our chord, or perhaps better: the full tune). One important aspect of the Fourier transform is that it does not depend on a ‘full’ k-space. We may leave out a few notes and still get the gist of the music, so to speak.
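The notes-and-chords analogy can be played out directly in code. This is a small illustrative sketch (the sampling rate and note frequencies are my own choices, nothing MRI-specific): we build a ‘chord’ from three sine waves and let a Fourier transform find the ‘notes’ again.

```python
import numpy as np

fs = 8000                                   # samples per second
t = np.arange(0, 1, 1 / fs)                 # one second of time points

# A chord made of three notes: A4, C#5 and E5 (approximately 440, 554 and 659 Hz).
chord = (np.sin(2 * np.pi * 440 * t)
         + np.sin(2 * np.pi * 554 * t)
         + np.sin(2 * np.pi * 659 * t))

# The Fourier transform splits the chord back into its component frequencies.
spectrum = np.abs(np.fft.rfft(chord))
freqs = np.fft.rfftfreq(len(chord), d=1 / fs)
print(sorted(freqs[np.argsort(spectrum)[-3:]]))   # the three note frequencies come back: 440, 554, 659 Hz
```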

Cutting k-space short. Some types of scan only collect parts of k-space. These are fast imaging sequences, but they have less signal to noise. Less signal to noise means that the amount of signal from the object being measured is reduced compared to the noise that we pick up when we image the object. We always get a certain amount of noise when imaging: from the scanner, the environment and sometimes also the object being imaged. A low signal-to-noise ratio is typically bad for image quality. Nevertheless, we can often get away with collecting only part of k-space, because fully sampled k-space has a built-in symmetry: each point mirrors (as a complex conjugate) the point on the opposite side of the centre. This allows us to mathematically fill in the missing lines in k-space from the lines we did collect. Some fast scan sequences sample only every second line in k-space, and we can still reconstruct the image from the information gathered.
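Here is a toy sketch of the fill-in-the-missing-half idea (a simplified version of what is often called partial-Fourier imaging; the stand-in array and the slightly-more-than-half split are just for illustration). Because the object is real-valued, every k-space point is the complex conjugate of the point mirrored through the centre, so the unsampled lines can be computed from the sampled ones:

```python
import numpy as np

image = np.random.rand(256, 256)        # real-valued stand-in object
kspace = np.fft.fft2(image)             # unshifted k-space, so indexing is simple

# Pretend we only acquired rows 0..128 (just over half of the lines).
acquired = np.zeros_like(kspace)
acquired[:129, :] = kspace[:129, :]

# Conjugate symmetry of k-space for a real-valued object:
# k[-i, -j] (indices taken modulo 256) equals the complex conjugate of k[i, j].
filled = acquired.copy()
for i in range(129, 256):
    for j in range(256):
        filled[i, j] = np.conj(acquired[(-i) % 256, (-j) % 256])

recon = np.real(np.fft.ifft2(filled))
print(np.allclose(recon, image))        # True: the image survives despite the missing lines
```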

The problem of motion. If there is any motion during the scan, there will be inconsistencies in k-space. The scanner does not ‘know’ that the object moved, so it tries to reconstruct the image from data that is not consistent. This means that the reconstruction via the Fourier transform will be flawed: we can get signal allocated to the wrong place, signal replicated where it should not be, or signal missing altogether. We see this as distortions or ‘artefacts’ in the final image. As the data is collected over time, the longer the collection time, the more vulnerable the scan is to motion. Data collected in the phase encoding direction (which takes a relatively long time, see my previous post on gradients) is therefore more vulnerable than data collected in the frequency encoding direction. In a later blog post, I will discuss how we can use motion measurements to adjust the spatial frequencies in k-space in a way that removes the effect of the movement. This is partly possible because we do not need the full k-space to do a Fourier transform and get the image, as described above.
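Motion can be mimicked in the same toy setting. In this sketch (stand-in array again, and a crude ‘the object jumped halfway through the scan’ model of my own), the later k-space lines come from a shifted version of the object, and the reconstruction no longer matches the original:

```python
import numpy as np

image = np.random.rand(256, 256)                          # stand-in object
kspace_still = np.fft.fft2(image)                         # k-space of the stationary object
kspace_moved = np.fft.fft2(np.roll(image, 10, axis=0))    # k-space after a 10-pixel shift

# First half of the lines collected before the movement, second half after.
corrupted = kspace_still.copy()
corrupted[128:, :] = kspace_moved[128:, :]

# The Fourier transform happily reconstructs the inconsistent data,
# producing misplaced and replicated signal rather than a clean image.
recon = np.abs(np.fft.ifft2(corrupted))
print(np.max(np.abs(recon - image)))                      # clearly non-zero: the artefact
```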

Summary. The take-home message on k-space is as follows:

  1. K-space is a matrix for data storage with a fancy name.
  2. It holds the spatial frequencies of the entire scanned object (i.e. the variations in brightness for the object).
  3. We can reconstruct the image from this signal using a Fourier transform.
  4. The reconstruction can still work if we sample only parts of k-space, meaning that we can get images in much shorter time.
  5. Problems during data collection (e.g. motion) may cause errors in k-space that appear as ‘artefacts’ in the image after the Fourier transform.

 

Caveat. There is more to k-space than what I have described above, as any medical physicist would tell you. While it is good to have an appreciation of what k-space is and is not, a detailed understanding is generally not needed for those of us simply using MRI in our work rather than working on MRI itself. This post is meant as a brief(ish) introduction, not a detailed explanation, and it contains some simplifications.

MRI: location, location, location

Say you want to give an orange an MRI scan. You pop it in the scanner, apply your radiofrequency pulse, and receive the signal. This signal will differ depending on which part of the orange it comes from, for example whether it is the flesh or the peel. But how would you be able to tell where the different types of signal come from? How would you tell that the peel signal is coming from the surface and the flesh signal from inside? In my previous post, I outlined what happens during an MRI scan. Here, I will (briefly) explain how we can tell which signal comes from which place in the object that we are scanning. This has to do with gradients.

Gradients. Gradients are additional magnetic fields that we apply to encode the location of the signal. These fields add a slight variation to the main magnetic field, so the resulting overall field (the main field plus the gradient) changes linearly with distance. Protons experience different fields depending on where they are in the object, and as a result they precess at different frequencies. In other words, we get a different signal from the protons depending on where they are in the object that we image.

We have two types of gradients that we employ: frequency-encoding gradients and phase-encoding gradients.

Frequency encoding is quite simple. We take our original field (B0), add a gradient field with strength Gf, and we get a new field (Bx) at any given point (x) in space, like this: Bx = B0 + x·Gf

The spin frequency of the protons depends on their position along this gradient. The frequency at point x is the sum of the frequency induced by the main field and that induced by the gradient. We can now differentiate, to some extent, between signal from different parts of the brain (see image below).
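To put rough numbers on this (the field strength and gradient amplitude below are illustrative values of my own, not anything from the post), the precession frequency at position x is proportional to the local field, B0 + x·Gf:

```python
import numpy as np

gamma_bar = 42.58e6      # gyromagnetic ratio of hydrogen divided by 2*pi, in Hz per tesla
B0 = 3.0                 # main field in tesla (a common clinical field strength)
Gf = 10e-3               # frequency-encoding gradient in tesla per metre (an illustrative value)

x = np.array([-0.1, 0.0, 0.1])          # positions along the gradient, in metres
freq = gamma_bar * (B0 + x * Gf)        # precession frequency at each position, in Hz
print(freq - gamma_bar * B0)            # offsets of about [-42580, 0, +42580] Hz
```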

[Image] When we add a frequency encoding gradient (blue) to our main field (green), we can differentiate between signal along this gradient axis (blue arrow). We cannot tell the difference between signal along the other axis (red arrow).

To complete the picture, we need to add phase encoding gradients. We do this by applying another gradient designed to make the signal fall out of sync along the phase encoding direction (the red arrow axis in the image above). Although the frequency of the signal remains the same, the new gradient will cause it to have a different phase depending on where it comes from along that axis. Think of it as a long line of cars waiting at a red light, all in sync. The light turns green, and the first car starts, then the second, then the third, and so on. Now the cars are out of sync: some are standing still, others are slowly moving, and the first cars are moving at full speed. At the same time, the cars at the back of the queue have become tired of waiting and start reversing (first the one at the very back, then the next one, and so on). The full speed profile of the row of cars suddenly has a great deal of variation, with some going forward at different speeds and some going backward at different speeds. In terms of MRI, the shift in phase causes the sum of the signals (the sum of the car speeds) along the red arrow axis to vary, and we can use this variation to figure out the individual signals (or individual cars) along the axis.
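In the same illustrative spirit (the gradient strength and timing below are example values of my own), the phase that the signal picks up at position y is proportional to the gradient amplitude, the time the gradient is switched on, and the position itself, so every position along the axis ends up with its own phase:

```python
import numpy as np

gamma_bar = 42.58e6      # Hz per tesla for hydrogen
Gp = 5e-3                # phase-encoding gradient amplitude in tesla per metre (illustrative)
tau = 1e-3               # how long the gradient is switched on, in seconds (illustrative)

y = np.array([-0.05, 0.0, 0.05])                 # positions along the phase-encoding axis, in metres
phase = 2 * np.pi * gamma_bar * Gp * tau * y     # phase picked up at each position, in radians
print(np.degrees(phase) % 360)                   # each position ends up with a different phase
```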

Through these frequency and phase encoding steps, we can figure out the strength and the location of the signal along both axes.

Frequency vs Phase Encoding. An important difference between frequency encoding and phase encoding is speed. Frequency encoding can be done with a single radiofrequency pulse (see my previous blog post on what happens during an MRI). We apply a gradient and a pulse, and all the frequencies can be read out at once. It is fast. For phase encoding, we need a new radiofrequency pulse for each phase change: we apply our pulse and then the gradient, let the phase change happen, measure the change, and then apply another pulse and another phase encoding step. For an image with 256 pixels in the phase encoding direction, we have to do this 256 times, each taking as long as one TR (repetition time); with a TR of, say, 500 ms, that adds up to roughly two minutes. It is slow. A consequence of this is that phase encoding is more vulnerable to movement than frequency encoding.

In my next blog post, I will describe how we store the MRI signal during the scan and how the signal can be transformed into an image.

What happens during an MRI?

MRI is complex, but the basic events in the scanner are quite straightforward. Below is a short, simple guide to what happens during an MRI scan, without too much physics to complicate matters. It explains the actual events in the scanner and gives a simplified overview of the parameters we can use to change the images we collect.


  1. First we align the protons in the object we want to scan with the magnetic field of the scanner (B0). This happens naturally when the object is placed into the scanner, as the slightly magnetic protons adjust themselves to match the magnetic field of the scanner.
  2. Second, we use a radiofrequency (RF) pulse to ‘tip’ these aligned protons out of their alignment with the magnetic field. This pulse is sometimes called B1 and is applied at a 90° angle to B0. This causes the protons to rotate, and we can measure this rotation with RF measurement coils. RF coils are essentially loops of wire, and the changing magnetic flux of the rotating protons induces an electric current in these loops. This works because changing electrical currents generate magnetic fields, and changing magnetic fields generate electrical currents. RF coils are often designed both to deliver the RF pulse (through an applied electrical current) and to receive the signal (through the resulting change in magnetic flux). During the rotation, two things happen.
    1. Protons begin to realign themselves with B0. The time this realignment takes is characterised by the T1 relaxation time. For any given type of object (or tissue, if we are doing medical imaging), the composition of the object will cause its protons to realign at a different rate. Faster realignment means brighter signal.
    2. Protons fall out of phase with each other. This reduces the signal we can measure with our coil (as the rotating protons are no longer ‘pulling in the same direction’). The time this dephasing takes is characterised by the T2 relaxation time. Protons remaining in phase for longer means brighter signal; a small sketch of both effects follows this list.
  3. The RF pulse is then applied again, and the procedure is repeated. Combining the signal from all these repeats gives us a clear MR image.
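The two relaxation processes mentioned in the list can be written as simple exponential curves. This is a minimal sketch with illustrative (roughly tissue-like) time constants of my own choosing, not values from the post: realignment with B0 follows 1 − exp(−t/T1), and the measurable signal decays as exp(−t/T2).

```python
import numpy as np

M0 = 1.0                     # signal available when all protons act together
T1, T2 = 0.9, 0.1            # illustrative relaxation times in seconds

t = np.linspace(0, 3, 7)     # time after the RF pulse, in seconds

Mz = M0 * (1 - np.exp(-t / T1))      # T1 relaxation: protons realigning with B0
Mxy = M0 * np.exp(-t / T2)           # T2 relaxation: protons dephasing, signal dying away

print(np.round(Mz, 2))       # climbs back up towards 1
print(np.round(Mxy, 2))      # drops quickly towards 0
```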

The contrasts of the scan (T1, T2) are determined by two parameters: the repetition time (TR) and the echo time (TE).

TR is the time between the RF pulses. If we have a long TR, all protons in the object have time to realign with B0. If we have a short TR, some protons may not have fully realigned by the time the next RF pulse arrives. In terms of medical imaging, some tissues need longer than others for all their protons to realign. If the ‘slow’ tissues have not realigned within the TR, the signal from these tissues will be lower than from the ‘fast’ tissues. This way, we can tell the difference between different tissues.

TE is the time from the RF pulse until we measure the signal induced by the rotating protons. Some types of tissue will have protons that fall out of phase (‘dephase’) faster than protons in other types of tissue. For example, protons in fluids encounter fewer obstacles and will remain in phase for quite a long time. Protons that are constrained by structures may not remain in phase that long. A longer TE means that the protons have more time to dephase, and this will reduce the signal from tissues that dephase quickly more than from tissues that dephase slowly.
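A common back-of-the-envelope way to combine these two effects (for a simple spin-echo-style sequence; the formula is a standard approximation, and the numbers plugged in below are illustrative) is to say that the measured signal is the proton density scaled by the amount of T1 recovery within the TR and the amount of T2 decay within the TE:

```python
import numpy as np

def mr_signal(pd, t1, t2, tr, te):
    """Approximate signal: proton density x T1 recovery within TR x T2 decay within TE."""
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Example: one tissue measured with a shorter and a longer echo time (all values illustrative).
print(mr_signal(pd=1.0, t1=0.9, t2=0.1, tr=0.5, te=0.015))   # short TE: more signal
print(mr_signal(pd=1.0, t1=0.9, t2=0.1, tr=0.5, te=0.1))     # long TE: less signal
```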

Generally speaking, we have three types of contrast: T1-weighted, T2-weighted and proton-density (PD) weighted.

A scan sequence with a short TR and a short TE is usually called T1-weighted. By ‘a short TE’ we usually mean that the TE is shorter than the T2. In other words, there is not enough time for the protons to dephase properly, and the T2 effects are masked. The short TR, on the other hand, means we can easily differentiate between tissues with longer and shorter T1. The scan is therefore T1-weighted. Tissues that are bright in T1-weighted scans are fat and white brain matter. Muscle and grey brain matter are less bright (grey in colour), and fluids tend to be black.

A long TR and long TE scan sequence is usually called T2-weighted. The longer TE means that we get differentiation based on protons dephasing at different rates, so the T2 effects are visible. The long TR, however, means that all tissues have time for all their protons to realign with B0, so we get no differentiation based on T1 times. The scan is therefore T2-weighted. Tissues that are bright in T2-weighted scans are fat and fluids. Muscle and grey brain matter are grey, and white brain matter is almost black.

A long TR and a short TE means we get neither T1 differentiation (the long TR lets everything realign) nor T2 differentiation (the short TE leaves little time for dephasing). What is left is the actual density of protons in the tissues, so we call this proton-density weighted. A short TR and a long TE would give us both T1 and T2 weighting at the same time, with the two effects working against each other. We don’t use this type of scan, as it doesn’t yield any useful information.
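To see how the TR/TE choices translate into contrast, here is a sketch that plugs rough, illustrative relaxation times for white matter, grey matter and cerebrospinal fluid (the exact numbers vary with field strength and source) into the same approximation as above. With T1-weighted settings the fluid comes out darkest; with T2-weighted settings it comes out brightest.

```python
import numpy as np

def mr_signal(pd, t1, t2, tr, te):
    """Approximate signal: proton density x T1 recovery within TR x T2 decay within TE."""
    return pd * (1 - np.exp(-tr / t1)) * np.exp(-te / t2)

# Illustrative tissue parameters: (proton density, T1 in seconds, T2 in seconds).
tissues = {
    "white matter": (0.7, 0.8, 0.08),
    "grey matter":  (0.8, 1.3, 0.10),
    "fluid (CSF)":  (1.0, 4.0, 2.00),
}

for name, (pd, t1, t2) in tissues.items():
    t1w = mr_signal(pd, t1, t2, tr=0.5, te=0.015)   # short TR, short TE: T1-weighted
    t2w = mr_signal(pd, t1, t2, tr=4.0, te=0.10)    # long TR, long TE: T2-weighted
    pdw = mr_signal(pd, t1, t2, tr=4.0, te=0.015)   # long TR, short TE: PD-weighted
    print(f"{name:12s}  T1w={t1w:.2f}  T2w={t2w:.2f}  PD={pdw:.2f}")
```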


In this post, I have summarised some of the basics about MR imaging. In my next post, I will move on to outline some of the basics about raw MR data processing, covering k-space and Fourier transforms.