Appreciation post: ‘junk’ labs

This is a simple appreciation post for junk labs in general and my junk lab in particular. A junk lab arises organically from research that regularly requires new, bespoke equipment with limited funding and a reasonable amount of technical know-how. If this new stuff is built from old stuff, that’s usually easiest. So it’s a lab with a cache of old bits and bobs, scavenged and re-purposed, amended and adjusted. And it looks like this.

 


Smoking in the scanner?

There is a new paper out in Scientific Reports, titled “Investigating the neural correlates of smoking: Feasibility and results of combining electronic cigarettes with fMRI”. This study has managed to combine actual smoking with functional MRI (fMRI).

Most studies looking at brain processing of smoking run into trouble with MRI. This is because smoking and scanning do not go well together. Hospitals don’t allow smoking, things should generally not be on fire in the MRI scanner, and ventilation is an issue when you’re lying in a narrow bore. Because of this, we haven’t been able to properly look at the sensations and behaviour of smoking alongside the effects of nicotine (and other active products in cigarette smoke). This study tries to get around these practical problems and also look at the brain response to real-time smoking.

For the practical part, the study used e-cigarettes. E-cigarettes solve some of the problems with smoking in the scanner (fire, and ventilation to some extent), but can cause image artifacts and may also contain metal. The paper shows that smaller types of e-cigarettes neither caused image artifacts nor posed a safety risk from a metal point of view. E-cigarette smoking is a good mimic for traditional smoking, so this is a workable model of ‘the real thing’ that fits with MRI.

In terms of brain responses, the authors found activation in several brain regions associated with smoking e-cigarettes. These regions included motor cortex, insula, cingulate, amygdala, putamen, thalamus, globus pallidus and cerebellum. There were also (relative) deactivations in the ventral striatum and orbitofrontal cortex associated with smoking.


Image from the paper showing brain responses when participants were instructed to smoke. Red-yellow is activation and blue is deactivation.

Some of this activation is (unsurprisingly) linked to movement. The motor cortex activation (stronger on the left-hand side, which corresponds to right-hand movement) is most likely due to the movements involved in smoking. Similarly, cerebellar activation is often related to motion. Other regions are associated more with the effects of smoking. The putamen is part of a brain region called the striatum, which plays a role in reward and in supporting addiction. The ventral striatum (and orbitofrontal cortex) are associated with drug craving.

From a personal point of view, having worked a great deal with breathing, I am excited that the paper showed activation in the insula and cingulate. Both are structures involved in breathing and breathlessness tasks. However, without behavioural measures to link the findings to, it is hard to say what this activation means in this setting. It is important to remember that just because a similar activation pattern occurs with two different tasks, it doesn’t necessarily follow that the activation means the same thing. Each region of the brain typically handles more than one thing, particularly cortical regions.

The authors also found that activation patterns were similar whether the participants were told when to smoke and when to stop (first scan) or could smoke at will (second scan). However, in the second scan, the activation was weaker. The authors suggest that this could be because the free-smoking task was more variable, meaning more between-subject variance and poorer timing (from an fMRI point of view). It could also be an order effect, as the subjects had more nicotine in their system in the second scan. This fits with the lower activation in reward-related brain regions in the second scan. Or it could simply be that smoking on command and smoking whenever one wants are different situations. Again, it is hard to tell why without other measures.

Nevertheless, this is an interesting paper, both from a methods point of view and for those interested in smoking processing and effects on the brain. It’s also written in a nice and easily accessible way. I’d recommend looking it up:

Reference: Matthew B. Wall, Alexander Mentink, Georgina Lyons, Oliwia S. Kowalczyk, Lysia Demetriou & Rexford D. Newbould. Investigating the neural correlates of smoking: Feasibility and results of combining electronic cigarettes with fMRI. Scientific Reports 7, Article number: 11352 (2017)
DOI: 10.1038/s41598-017-11872-z
Website: https://www.nature.com/articles/s41598-017-11872-z 

Art+Science

I have been thinking about art lately. The University of Sheffield’s annual Festival of Academic Writing is coming up and I have just finished reviewing a few papers and grant applications. Mostly, this has made me consider how scientists can suffocate enthusiasm for even the most exciting finding in a single passive paragraph (never mind a whole string of them), and I may do a post about the horrors of ‘academese’ at a later stage. However, as the planning for next year’s Festival of the Mind is also underway, I wanted to write about something more positive: how art and science can work together.

Science and art to me are two sides of the same coin. Both aim to understand and describe the world, each in their own way. Science is the more objective of the two, but the way we gather and interpret data is without doubt influenced by our assumptions and world view. Art, on the other hand, challenges these assumptions, offers new ways of looking at the world. In short, science can answer our questions, but art may just help us ask the right ones. We need both to progress.

I have a great deal of time for art barging into the halls of data and analysis (or, for that matter, science picking up the brushes and the paint). Either way, it is a bold move, and the results can be equally impressive. As a prime example of how art and science can work together, watch this video by Jan Fröjdman, who has painstakingly pieced together still images of Mars (from the HiRISE camera) to generate a representation of a ‘live’ flight over the red planet. It is an absolutely stunning interpretation of data.

A FICTIVE FLIGHT ABOVE REAL MARS by Jan Fröjdman (see Vimeo for image credit).

Sheffield has a thriving art scene, and one that is not frightened of interacting with the sciences. The aforementioned annual Festival of Academic Writing lets academics write creative pieces for the Journal of Imaginary Research and poke fun at the near-obligatory passive voice to their little hearts’ content. I very much enjoyed taking part last year and have just signed up for the upcoming November workshop. There are plenty of art installations throughout both the year and the city that communicate hot-off-the-press research to the public alongside more traditional science outreach events. The Winter Gardens is a frequently used venue and a good place to go for a bit of lateral thinking. There are artists who specialise in the communication of science and medicine, for example through the live-drawing of conferences, and who manage to reduce complex concepts to easily interpreted visuals. And how about the live art-rock soundtrack to footage from the Hubble Telescope (plus three Georges Méliès films for good measure)? These are excellent ways of communicating science, as well as perhaps offering up new ways of looking at old questions.

Above are examples of Sheffield’s Art+Science scene (all images reproduced with permission). At the top left, there is Luke Jerram’s giant inflatable E. coli hovering in the Winter Gardens, which surely had the potential to inspire both budding microbiologists and nightmares during KrebsFest 2015. Top right is artist Kate Sully‘s excellent work for The Journey of Reproductive Life from the University of Sheffield’s 2016 Festival of the Mind. The 2018 Festival is already being planned, with the involvement of animators, musicians, visual and digital artists, dancers and performers. Below Sully’s art is the cover of the Journal of Imaginary Research, vol 2 (2017), with work on volume 3 starting in November 2017 for publication in early 2018. Bottom left is a beautiful representation of signal in the ovary created by Isam Sharum, Felicity Tournant and Sofia Granados-Aparici (Ovary Research Group, 2015) – one of several pieces of science-inspired art within the University of Sheffield and one which I get to admire every morning.

Art has a lot to offer the sciences. Of course, if one wants to take the mercenary view, a collaboration with the arts would probably fit the parameters for an impact case for the Research Excellence Framework. But aside from the REF, art can be used to communicate science, to inspire curiosity, to guide our questions and to make a whole generation dream of jet packs and rocket ships.

 

Reblog: “We need to stop calling professional development a ‘pipeline’”

Excellent post by Small Pond Science on why ‘pipeline’ is a problematic metaphor for the scientific career path.

When we talk about increasing the representation of women and ethnic minorities in STEM, the path towards a professional career is often characterized as a “pipeline.” The pipeline metaphor is so entrenched, it affects how people think about our deep-rooted problems. This metaphor has become counterproductive, because it fails to capture the nature of the […]

read the rest of We need to stop calling professional development a “pipeline” at Small Pond Science

Pulmonary rehab: changing the signal

Pulmonary rehabilitation is one of the most effective treatments for breathlessness in chronic obstructive pulmonary disease (COPD), yet its effect is variable. While up to 60% of patients who complete a course of treatment see an improvement, that leaves 40% who do not. Understanding why it works for some and not for others could help personalise and improve treatment for COPD. This is what we’ve focused on in our most recent paper (preprint here), which will be published in the European Respiratory Journal. UPDATE: final published paper here.

A bit of background on how sensations are perceived. When we feel a sensation, our brains often both register and modulate the sensory information from the body. In fact, our sensory perception is probably quite dependent on how the brain processes incoming sensory information. This is influenced by what the brain thinks will happen and why. It is thought that previous experiences create expectations in the brain about sensations (these expectations are called priors), and that these are updated whenever the brain receives actual sensory information.

Below is a quote from a paper on how priors influence pain perception by Geuter et al. [1], explaining the concept so clearly I decided to reproduce it in its entirety:

All over the human body, there are receptors that help to alert the brain to potential harm. For example, intense heat on the skin elicits a signal that travels to the brain and activates many parts of the brain. Some of the same brain regions that are switched on by signals of potential bodily harm also help the brain to form expectations about events. A person’s expectations may have a strong influence on how they experience pain. For example, if a person expects that taking a pill will reduce their pain, they may feel less pain even if the pill is a fake.

Exactly how the brain processes pain signals and expectations remains unclear. Does the brain activity simply reflect how intense the heat is? Some scientists think there may be two separate processes going on: one that predicts what will happen and another that calculates the difference between the prediction and what the receptors actually detect. This difference is called a prediction error. If every unpredicted sensory signal elicits a calculation of the prediction error, that would help improve the brain’s future predictions.

This system is open to manipulation. There are many factors that can adjust these priors or weight the incoming sensory information, causing the person to over- or under-perceive sensations. For example, anxiety and attentional bias may cause over-perception of sensations.
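To make the weighting idea concrete, here is a minimal sketch (my own illustration, not anything from the paper) of a precision-weighted combination of a prior and a sensory signal, where precision (the inverse of variance) does the weighting described above. All numbers are made up.

```python
def combine(prior_mean, prior_var, sensory_mean, sensory_var):
    """Precision-weighted (Bayesian) combination of a prior and a sensory signal."""
    prior_precision = 1.0 / prior_var        # a confident prior has high precision
    sensory_precision = 1.0 / sensory_var    # a noisy signal has low precision
    posterior_mean = (prior_precision * prior_mean +
                      sensory_precision * sensory_mean) / (prior_precision + sensory_precision)
    posterior_var = 1.0 / (prior_precision + sensory_precision)
    return posterior_mean, posterior_var

# A strong prior of severe breathlessness (8/10) dominates a milder bodily signal (4/10):
print(combine(prior_mean=8.0, prior_var=0.5, sensory_mean=4.0, sensory_var=4.0))
# Weaken the prior and the same bodily signal pulls the percept much further down:
print(combine(prior_mean=8.0, prior_var=4.0, sensory_mean=4.0, sensory_var=4.0))
```

The first call gives a percept close to the prior (over-perception); the second, with a less precise prior, lands much nearer the actual bodily input.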

But how? How does this relate to COPD? In COPD, a prior may be formed linking shortness of breath to physical activity, for example climbing stairs. This prior, if bolstered by, for example, anxiety and attentional bias, may begin to dominate and cause over-perception of breathlessness. This means that breathlessness perception would be governed more by the prior and the anxiety/fear than by the input from the body. In this example, a simple flight of stairs becomes a cue for the brain to access its priors, generating an expectation of breathlessness and anxiety, all because that is what previous experiences have demonstrated will happen.

Pulmonary rehabilitation, however, challenges these priors. Rehabilitation makes the patient face their breathlessness, but in a safe healthcare setting. This may change the patient’s priors and how they process breathlessness-related cues. If this is the case, we may expect that patients with different priors show different treatment outcome, and we may expect that patients show a different response to cues after treatment than before.

But where? We know that predictions about bodily state and emotion (i.e. priors) are typically generated in a stimulus valuation network. This network consists of many brain regions, including the anterior insula, anterior cingulate cortex (ACC), orbitofrontal cortex and ventromedial prefrontal cortex. There are also more ‘downstream’ regions associated with breathing, including the posterior insula, which processes incoming respiratory sensory information. These regions are responsible for sending sensory information from the body to other parts of the brain (both those dealing with the physical sensation and those processing the emotional impact, such as the stimulus valuation network). The posterior insula, along with regions such as the angular gyrus and the supramarginal gyrus, is also involved in how much attention a physical sensation gets. All of these regions are likely places where pulmonary rehabilitation would change activation patterns.

What we did. We recruited 31 people with COPD and studied them before and after pulmonary rehabilitation. On each visit, we did the same tests: we collected a set of behavioural questionnaires (of which we used one, the Dyspnoea-12 [2], as our main measure of breathlessness); we did a lung function test and an exercise test; and we did a functional brain scan (FMRI) to test brain activity while the participants looked at (and rated) breathlessness-related cues for anxiety (“How anxious would this make you feel?”) and breathlessness (“How breathless would this make you feel?”). *

Behavioural changes. The patients’ anxiety ratings were, overall, much lower after rehabilitation, and this change correlated with the main measure of breathlessness (Dyspnoea-12). The correlation was influenced by changes in depression in our patients, although we don’t know whether it is the depression that influences anxiety and breathlessness, the anxiety that influences depression and breathlessness, or the breathlessness that influences anxiety and depression. It may easily be that all of these factors influence each other. We do, however, know that they are linked. The figure below (Fig 1) shows how all the behavioural and physiological measures are correlated.

Fig 1. Correlation matrices of the measured behavioural variables. Abbreviations: wA, cue ratings of anxiety; wB, cue ratings of breathlessness; StG, St George’s Respiratory score; Cat, Catastrophising score; Vig, Vigilance/Awareness score; Dep, Depression score; T Anx, Trait anxiety; S Anx, State anxiety; Fat, Fatigue; BisBas, inhibition/activation scale; Spir, lung function (FEV1/FVC); ISWT, exercise ability (incremental shuttle walk test).**

While rehabilitation worked for the group as a whole, there was variability in the treatment response between patients. There was also no improvement in breathlessness ratings, nor any change in lung function in the group. Lung function was not linked to any of the behavioural measures, suggesting that it is not a good measure of the impact of breathlessness in COPD.

Brain changes. Then we looked at how variation in brain activity explained the variation in our patients’ ratings of the cues over the course of their treatment. By looking at how variation in brain activity follows variation in ratings, we could make sure that even the patients who didn’t show the typical treatment response were included in the analysis. In other words, if a patient didn’t respond, it is likely that their brain activation would not change either, and if a patient got worse, we might see their brain activation move in a different direction from those who got better. This gives us a much stronger idea of which areas get upregulated and downregulated (or stay the same) with successful treatment.

Looking at this variation, we saw that reduced breathlessness was linked with less activation in some brain regions (the anterior insula, ACC, posterior insula and supramarginal gyrus). This is a dampening in activity in brain areas handling expectations of breathlessness, and it could mean that successful treatment works by making patients re-evaluate their priors. Reduced anxiety was linked with greater activation in a slightly different set of brain regions (the posterior cingulate cortex, angular gyrus, primary motor cortex and supramarginal gyrus). As a set, these are involved with how much attention a physical sensation gets, and may be dampened by anxiety. In other words, if you are anxious, it is difficult to regulate how much attention you give a thing (i.e. if you are scared of spiders, you can’t just ignore one if you see one). So when we see an increase in these regions, this may mean that the patients are less anxious and more able to regulate attention. Taken together, this suggests that our patients had a more objective processing of breathlessness cues and were less dominated by their priors after rehabilitation.

Fig 2. Change in brain activity that fits with rehabilitation-induced changes in response to breathlessness cues (both for anxiety and for breathlessness). Blue colours mean lower brain activity, and red/yellow colours mean higher brain activity. **

Predicting treatment outcome. We also looked at whether brain activation before the treatment could predict who would benefit from the treatment and who would not. Several regions showed higher activation in those patients who went on to improve with treatment. These included the stimulus valuation network plus the primary motor cortex. Improvements in anxiety ratings were predicted by high activation in the ACC and ventromedial prefrontal cortex, which overlaps with one of our previous studies looking at breathlessness and anxiety in COPD patients versus healthy controls [3]. These findings are also supported by a study that showed how higher fear levels before pulmonary rehabilitation tends to mean a greater response to treatment [4].

Fig 3. Brain activity before treatment that is linked with treatment outcome, both in terms of breathlessness (top) and anxiety (bottom). **

To conclude. Pulmonary rehabilitation seems to lead to reduced activity in the brain’s stimulus valuation network and increased activity in attention regulating networks. Those with strong responses in the stimulus valuation network before pulmonary rehabilitation typically see a bigger reduction in their responses to breathlessness cues after treatment. It may be that pulmonary rehabilitation works both by updating breathlessness-related priors and by reducing feelings of depression and anxiety that typically influence sensory processing. *** If this is the case, then we could improve treatment by focusing on re-learning priors, either by using drugs or alternative behavioural therapies. We could also use MRI as a way of developing behavioural tests (questionnaires, computerised tasks) that can be used to figure out who will benefit the most and in which way from the treatment.

References:
[1] Geuter, S. et al. eLife 2017; 6:e24770
[2] Yorke, J. et al. Thorax 2010; 65: 21-26
[3] Herigstad, M. et al. Chest 2015; 148(4): 953-961
[4] Janssens, T. et al. Chest 2011; 140: 618-625

Footnotes:
*The FMRI analysis used standard significance thresholds (cluster-forming threshold Z = 2.3, cluster p < 0.05 corrected for multiple comparisons across the whole brain).
**Adapted from Herigstad et al, 2017, biorxiv: https://doi.org/10.1101/117390. The copyright holder for this preprint is the author/funder. It is made available under a CC-BY 4.0 International license
***In addition to potential improvements in fitness. We did see an increase in exercise capacity, even though none of the measured baseline physiological variables changed, and it is possible that the rehabilitation causes the patients to become healthier and stronger.

Link: 
The paper is available from here: http://biorxiv.org/content/early/2017/03/23/117390
DOI: https://doi.org/10.1101/117390

The published paper is out here

Full citation: Mari Herigstad, Olivia K. Faull, Anja Hayen, Eleanor Evans, F. Maxine Hardinge, Katja Wiech & Kyle T. S. Pattinson. Treating breathlessness via the brain: changes in brain activity over a course of pulmonary rehabilitation. European Respiratory Journal.

The Journal of Imaginary Research

In November last year I had the opportunity to participate in a creative writing workshop as part of WriteFest 2016 at the University of Sheffield. The challenge was to write a mock academic abstract and academic profile based on a random picture from the research of one of the other workshop participants, and the final result was to be published in the Journal of Imaginary Research. The picture I received was a collage of box files, decorated with quotes and images, and with holes cut in the front. I have no idea whose picture it was or what the research was about. The workshop group consisted of quite a few representatives from the humanities, so my guess is it came from one of them rather than the measly two or three other STEM students/employees. In any case, it was a fun experience, and I thought I’d share my abstract.


The full issue can be found here: the Journal of Imaginary Research, volume 2


And just in case anyone wondered, this was the picture I submitted from my own research:

MRI and motion correction

Magnetic resonance imaging is sensitive to motion. Just like with other images, movement may cause blurring and distortion (‘artefacts’). To counteract this, motion correction methods are often used. These include devices that track motion as well as software that can correct some of the artefacts after the images have been collected. We have just published a paper on a potential new way to do this, using a wireless accelerometer (link, open access [1]), so here is a quick blog post about motion and MRI, explaining some of our findings along the way.

The GE Firefly scanner, 3T

One of the reasons for doing this work is that we are using a new scanner for newborn babies. Motion is always an issue in MRI, even for adults, but the scanning of newborns may be particularly vulnerable. It is not always easy to convince a newborn baby to remain still. Newborns may move several centimetres where adults only shift a few millimetres. As the newborn is smaller, this movement also has greater impact, in that it can completely displace the (much smaller) structure of interest. Newborn babies also show differences in physiology compared to adults, which can affect the scan. For example, they breathe faster and less regularly, and the resulting motion is transmitted to the head to a greater degree (due to the smaller distance between head and chest) [2].

Types of motion
Motion comes in many types. There is microscopic motion, related to for example circulation of blood or water diffusion, and there is macroscopic motion, related to whole-body movement and physiological functions, for example breathing movement. It may be periodic (e.g. breathing movement), intermittent (e.g. yawns, hiccups) or continuous (e.g. general unsettledness, scanner vibrations). In research settings, noise and motion may be induced by experimental procedures [3]. Motion causes artefacts such as blurring, signal loss, loss of contrast and even the replication of signal in wrong places (‘ghosting’) – all lowering the quality of the image. An example of a motion artefact can be seen in the image below.

Fast Spin Echo image. Left: no motion artefact; Right: artefact due to in-plane rotational head movement. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)

In the figure above, there are lines on the right-hand scan (red arrow), which are distortions. These distortions were created because the head rotated slightly whilst it was being scanned. Too much distortion and the image will become less useful for clinical and experimental purposes.

Types of motion correction
There are many types of MRI motion correction. The simplest is often to prevent and minimise movement in the first place, using patient coaching, sedation, and fast and/or motion-resistant imaging protocols. A fast scan of a still individual will usually give very little motion. However, this may not always be possible. Patients do not always lie still, sedation may not always be a good idea, and even our best imaging sequences are vulnerable to movement to some extent. Large movement is therefore often best tackled through different means: it is detected and corrected for. Correction can be done during the scan (real-time) or after the scan (during creation of the image from the raw data and/or post-processing of the image).

There are limits to this type of large-movement correction. For example, we can use so-called navigator pulses during the scan to correct for movement in real time, but they tend to make scans take much longer. We can also use tracking devices to correct for motion both during and after a scan, but such devices are limited by the level of motion they can detect, and they require a fair bit of extra equipment to work inside or interact with the scanner. Finally, we can correct for motion in reconstruction or post-processing, but this too usually takes a lot of time and effort. Which type(s) of correction method is best may differ between types of scan, patients, experimental protocols and so on.

In our paper, we used an external motion measuring device – a wireless accelerometer, similar to those sold for fitness purposes – to measure motion of the head. The nice thing about this is that it can give us full real-time 3D information about how the head moves. It is not like a visual device, which needs a clear line of sight to be able to ‘observe’ the head at all times. The accelerometer gave us continuous, wireless feedback on the angle of the object being scanned. We could then use this information to adjust the MR data, using a motion correction algorithm. The algorithm, using movement data from the accelerometer, adjusted how the MR signal was recorded at each given time point. In short, we were using the signal from the accelerometer to shift k-space.

This meant that shifts in signal due to movement could in theory be recorded and, at least partly, fixed. Conversely, it also meant that motion could be introduced into a motion-free image. To introduce motion, we first made a motion data file with the accelerometer, simply by manually rotating it and recording the angles. We then applied this motion data file to the raw data of a motion-free scan. The motion file was used to shift signal in k-space for each affected phase encode step. Doing this, we could distort the image in the same way that real motion would, despite there being no original motion in the MR data. We could ramp this up as we pleased, adding more and more ‘motion’, as shown in the figure below.

MR images incorporating increasing amounts of motion. (a) Original no-motion image, (b–f) motion applied, starting with 2 × 10⁻² radians (b) and doubled for each successive image. Image from Paley et al. 2017. Technologies, 5(1); 6. (CC BY 4.0)
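For intuition, here is a small numpy/scipy sketch of how rotational motion can be introduced into a clean image (my own toy version of the general idea, not the paper’s code): a recorded series of rotation angles, one per phase-encode line, is imposed by ‘re-acquiring’ each k-space line from a correspondingly rotated image.

```python
import numpy as np
from scipy import ndimage

image = ndimage.gaussian_filter(np.random.rand(128, 128), 3)  # stand-in "scan"
n_lines = image.shape[0]

# Pretend accelerometer trace: in-plane angle (degrees) when each line was acquired.
angles = np.cumsum(np.random.randn(n_lines) * 0.2)

# Acquire k-space line by line, with the object rotated by the angle at that moment.
kspace = np.zeros(image.shape, dtype=complex)
for line, angle in enumerate(angles):
    rotated = ndimage.rotate(image, angle, reshape=False, order=1)
    kspace[line, :] = np.fft.fftshift(np.fft.fft2(rotated))[line, :]

corrupted = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
# 'corrupted' now shows the blurring/ghosting that the real rotations would cause.
```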

In principle, reversal of the motion effects should be possible. The motion in the figure above was introduced using a standard rotation matrix which multiplied the k-space locations by the measured angle, and if we reverse this process (i.e. counter-rotate the k-space data according to the measured angles), removing the noise should be possible. As with most things, it is easier to break than fix, yet we did see a subtle reversal of motion artefacts for a simple side-to-side rotation. This means that a wireless accelerometer may eventually be used to retrospectively correct for motion in neonatal MRI scans. It is also possible that it could be used for guiding real-time correction methods.

References: 
1. Paley, M., Reynolds, S., Ismail, N., Herigstad, M., Jarvis, D. & Griffiths, P. Wireless Accelerometer for Neonatal MRI Motion Artifact Correction. Technologies. 2017; 5(1): 6. doi:10.3390/technologies5010006
2. Malamateniou, C., Malik, S., Counsell, S., Allsop, J., McGuinness, A., Hayat, T., Broadhouse, K., Nunes, R., Ederies, A., Hajnal, J. & Rutherford, M. Motion-compensation techniques in neonatal and fetal MR imaging. Am J Neuroradiol. 2013; 34(6): 1124-36.
3. Hayen, A., Herigstad, M., Kelly, M., Okell, T., Murphy, K., Wise, R., & Pattinson, K. The effects of altered intrathoracic pressure on resting cerebral blood flow and its response to visual stimulation. NeuroImage. 2012; 66: 479-488. doi: 10.1016/j.neuroimage.2012.10.049.

Reblog: “I fell into this by accident”

I planned my career, but not all went exactly as I envisioned. Opportunities emerged, unexpected findings were too interesting not to pursue, new studies arose from old ones, new collaborators came up with exciting suggestions, and pints with colleagues often turned into interesting, half-crazy ideas that somehow worked. As a result, I have worked across several fields (at the peril of not being able to self-cite very often), and gained a great deal of unexpected experience. It has worked out pretty well, if I do say so myself. Sheffield University’s Think Ahead research support group recently published a short blog post on Krumboltz, Levin and Mitchell’s ‘Planned Happenstance’ theory, which resonated with me. This focuses on making the best of everyday opportunities to succeed and contains some valuable suggestions on how to do this. The Think Ahead post is a neat overview of the theory and worth a read.

Career planning is often presented as applying a logical, step by step approach following a linear pattern to develop your career but the reality is that life in general and our career thinking more specifically often don’t work that way. We have a huge number of options open to us and deciding what is the […]

Read the rest of “I fell into this by accident” at the Think Ahead Blog

MRI and k-space

MRI images are created from raw data contained in a raw data space, called k-space. This is a matrix where MR signals are stored throughout the scan. K-space is considered a bit of a tricky topic, so I will only outline a brief explanation of what k-space is and how it relates to the MR image.

A visualisation of k-space.

K-space is what? The first thing to recognise is k-space is not a real space. It is a mathematical construct with a fancy name – a matrix used to store data. The data points are not simple numbers, they are spatial frequencies. Unlike ‘normal’ frequencies, which are repetitions per time, spatial frequencies are repetitions per distance. They are, in other words, waves in real space, like sound waves. We usually measure them as cycles (or line pairs) per mm. The number of cycles per mm is called the wavenumber, and the symbol used for wavenumber is ‘k’ – hence k-space.

It may be easiest to envision these spatial frequencies as variations in the brightness of the image (or variations in the spatial distribution of signal, if we are to get more technical). So we have a large matrix of brightness variations, which together make up the complete image. The matrix is full when the image is completed (i.e. the scan is done).


K-space relates to the image how? The second thing to recognise is that k-space corresponds (although not directly) to the image. If the image size is 256 by 256, then k-space will have 256 columns and 256 rows. This doesn’t mean, however, that the bottom right-hand k-space spatial frequency holds the information for the bottom right-hand image pixel. Each spatial frequency in k-space contains information about the entire final image. In short, the brightness of any given ‘spot’ in k-space indicates how much that particular spatial frequency contributes to the image. K-space is typically filled line by line (but we can also fill it in other ways).
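This is easy to check with a toy example. The small numpy sketch below (my own illustration) keeps a single point of k-space and reconstructs from it alone; the result is a stripe pattern (one spatial frequency) spread across the whole image, not a single bright pixel.

```python
import numpy as np

image = np.zeros((256, 256))
image[96:160, 96:160] = 1.0                    # a simple square "object"
kspace = np.fft.fftshift(np.fft.fft2(image))   # full k-space, centre at (128, 128)

single = np.zeros_like(kspace)
single[138, 132] = kspace[138, 132]            # keep one k-space point only

recon = np.fft.ifft2(np.fft.ifftshift(single)).real
# 'recon' is a wave covering the entire 256 x 256 image: one k-space point
# contributes everywhere, not to one corresponding pixel.
```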

Frequencies across k-space. Typically, the edges of k-space hold high spatial frequency information compared to the centre. Higher spatial frequencies give us better resolution, and lower spatial frequencies give better contrast information. Think of it like this: when you have abrupt changes in the image, you also get abrupt variations in brightness, which means high spatial frequencies. No abrupt changes means lower spatial frequencies. This in effect means that the middle of k-space contains one type of information about the image (contrast), and the edges contain another type of information (resolution/detail). If you reconstruct only the middle of k-space, you get all the contrast/signal (but no edges) – like a watercolour – and if you reconstruct only the edges of k-space, you get all edges (but no contrast) – like a line drawing. Put them together, however, and we get all the information needed to create an image.
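The same toy setup shows the watercolour/line-drawing split directly (again my own sketch): mask k-space so that only the centre, or only the edges, survives, and reconstruct each part.

```python
import numpy as np

image = np.zeros((256, 256))
image[96:160, 96:160] = 1.0
kspace = np.fft.fftshift(np.fft.fft2(image))

yy, xx = np.mgrid[-128:128, -128:128]
centre = np.sqrt(xx**2 + yy**2) < 16           # low spatial frequencies only

low = np.fft.ifft2(np.fft.ifftshift(kspace * centre)).real    # 'watercolour'
high = np.fft.ifft2(np.fft.ifftshift(kspace * ~centre)).real  # 'line drawing'
# 'low' is a blurry but correctly shaded square; 'high' keeps only its edges.
```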


Transformation: Notes and chords. The route from k-space to image is via a Fourier transform. The idea behind a Fourier transform is that any waveform signal can be split up in a series of components with different frequencies. A common analogy is the splitting up of a musical chord into the frequencies of its notes. Like notes, every value in k-space represents a wave of some frequency, amplitude and phase. All the separate bits of raw signal held in k-space (our notes) together can be transformed into the final MRI image (our chord, or perhaps better: the full tune). One important aspect of the Fourier transform is that it is not dependent on a ‘full’ k-space. We may leave out a few notes and still get the gist of the music, so to speak.

Cutting the k-space. We have types of scan that only collect parts of k-space. These are fast imaging sequences, but they have less signal to noise. Less signal to noise means that the amount of signal from the object being measured is reduced compared to the noise that we encounter when we image the object. We always get a certain amount of noise when imaging: from the scanner, the environment and sometimes also the object being imaged. A low signal-to-noise ratio is typically bad for image quality. Nevertheless, we can get away with collecting fewer lines of k-space because k-space itself has a useful symmetry: for an object whose image is essentially real-valued, opposite points of k-space are complex conjugates of each other. This allows us to mathematically fill in the missing lines from the ones we did collect. Some fast scan sequences sample only every second line in k-space, and we can still reconstruct the image from the information gathered.
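Here is a toy numpy check of that symmetry (my own illustration; it ignores the phase errors present in real scans, which is why practical partial-Fourier methods also include a phase-correction step). We ‘acquire’ just over half the lines and compute the rest from their complex conjugates.

```python
import numpy as np

image = np.random.rand(256, 256)             # stand-in real-valued object
kspace = np.fft.fft2(image)

acquired = np.zeros_like(kspace)
acquired[:129, :] = kspace[:129, :]          # collect just over half the lines

# For a real-valued image, F(-k) = conj(F(k)), so the missing lines follow:
filled = acquired.copy()
rows, cols = np.arange(129, 256), np.arange(256)
filled[129:, :] = np.conj(acquired[(-rows) % 256][:, (-cols) % 256])

recon = np.fft.ifft2(filled).real
print(np.allclose(recon, image))             # True: the image is fully recovered
```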

The problem of motion. If there was any motion during the scan, there will be inconsistencies in k-space. The scanner does not ‘know’ that the object moved, so it tries to reconstruct the image from data that is not consistent. This means that the reconstruction via the Fourier transform will be flawed, and we can get signal allocated to the wrong place, signal replicated where it should not be, or signal missing altogether. We see this as distortions or ‘artefacts’ in the final image. As the data is collected over time, the longer the collection time, the more vulnerable the scan is to motion. Data collected in the phase encoding direction (which takes a relatively long time, see my previous post on gradients) is therefore more vulnerable than data collected in the frequency encoding direction. In a later blog post, I will discuss how we can use motion measurements to adjust the spatial frequencies in k-space in a way that removes the effect of the movement. This is partly possible because we do not need the full k-space to do a Fourier transform and get the image, as described above.

Summary. The take home message on k-space is as follows:

  1. K-space is a matrix for data storage with a fancy name.
  2. It holds the spatial frequencies of the entire scanned object (i.e. the variations in brightness for the object).
  3. We can reconstruct the image from this signal using a Fourier transform.
  4. The reconstruction can still work if we sample only parts of k-space, meaning that we can get images in much shorter time.
  5. Problems during data collection (i.e. motion) may cause errors in k-space that appear as ‘artefacts’ in the image after the Fourier transform.

 

Caveat. There is more to k-space than what I have described above, as a medical physicist would tell you. While it is good to have an appreciation of what k-space is and is not, a detailed understanding is generally not needed for those of us simply using MRI in our work rather than working on MRI. This post is meant as a brief(ish) introduction, not a detailed explanation, and contains some simplifications.

MRI: location, location, location

Say you want to give an orange an MRI scan. You pop it in the scanner, apply your radiofrequency pulse, and receive the signal. This signal will differ depending on which part of the orange it comes from, for example whether it is the flesh or the peel. But how would you be able to tell where the different types of signal come from? How would you tell that the peel signal is coming from the surface and the flesh signal from inside? In my previous post, I outlined what happens during an MRI scan. Here, I will (briefly) explain how we can tell which signal comes from which place in the object that we are scanning. This has to do with gradients.

Gradients. Gradients are additional magnetic fields that we apply to specify the location that we want to image. These fields vary slightly from the main magnetic field. The resulting overall field (the main field plus the gradients) now contains variation. Specifically, it changes linearly with distance. Protons will experience different fields depending on where they are in the object, and as a result, they have different frequencies. In other words, we get a different signal from the protons depending on where they are in the object that we image.

We have two types of gradients that we employ: frequency-encoding gradients and phase-encoding gradients.

Frequency encoding is quite simple. We take our original field (B0), add a gradient field (Gf), and we get a new field (Bx) for any given point (x) in space. Like this: Bx = B0 + x·Gf

The spin frequency of the protons depends on their position along this new field: by the Larmor relationship, frequency is proportional to field strength, so the frequency at point x is the sum of the frequency induced by the main field and that induced by the new gradient. We can now differentiate between signals from different parts of the brain, to some extent (see image below).

When we add a frequency encoding gradient (blue) to our main field (green), we can differentiate between signal along this gradient axis (blue arrow). We cannot tell the difference between signal along the other axis (red arrow).

To complete the picture, we need to add phase encoding gradients. We do this by applying another gradient designed to make signal fall out of sync along the phase encoding direction (red arrow axis in the image above). Although the frequency of the signal remains the same, the new gradient will cause it to have a different phase depending on where it comes from (along the red arrow axis). Think of it as a long line of cars waiting at a red light, all in sync. The light turns green, and the first car starts, then the second, then the third, and so on. Now, the cars are out of sync: some are standing still, others are slowly moving, and the first cars are moving at full speed. Also, at the same time, the cars at the back of the queue have become tired of waiting and start reversing (first the one all the way at the back, then the next one, and so on). The full speed profile of the row of cars suddenly has a great deal of variation, with some going forward at different speeds and some going backward at different speeds. In terms of MRI, the shift in phase causes the sum of the frequencies (the sum of the car speeds) along the red arrow axis to vary, and we can use this variation to figure out the individual signals (or individual cars) along the axis.

Through these frequency and phase encoding steps, we can figure out the strength of the signal and location of signal along both axes.
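As a toy illustration of the frequency-encoding half of this (my own sketch, with made-up constants rather than real scanner values), the numpy snippet below sums the oscillations of spins whose frequency is proportional to position, then Fourier-transforms the summed signal back into a profile of where the signal came from:

```python
import numpy as np

n = 256
x = np.arange(n) / n - 0.5                     # positions along the gradient (arb. units)
density = np.exp(-((x - 0.1) / 0.05) ** 2)     # the "object": a bright spot at x = 0.1

gradient = n                                   # frequency per unit position (illustrative)
t = np.arange(n) / n                           # readout sample times
# Received signal: at each time point, the sum of every spin's oscillation.
signal = (density * np.exp(2j * np.pi * gradient * x * t[:, None])).sum(axis=1)

profile = np.abs(np.fft.fftshift(np.fft.fft(signal)))
print(x[np.argmax(profile)])                   # ~0.1: frequency maps back to position
```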

Frequency vs Phase Encoding. An important difference between frequency encoding and phase encoding is speed. Frequency encoding can be done with a single radiofrequency pulse (see my previous blog post on what happens during an MRI). We apply a gradient and a pulse, and all the frequencies can be read out at once. It is fast. For phase encoding, we need to apply a radiofrequency pulse for each phase change. We apply our pulse and then the gradient, let the phase change happen, measure the change, and then apply another pulse and gradient for the next phase change. For an image with 256 phase-encoding steps, we have to do this 256 times, each taking as long as one TR (repetition time). It is slow. A consequence of this is that phase encoding is more vulnerable to movement than frequency encoding.

In my next blog post, I will describe how we store the MRI signal during the scan and how the signal can be transformed into an image.