Analysing brain blood flow

In a previous post I described how to use ultrasound to measure brain blood flow. So, the data has been gathered. To recap, this is what it looks like:


Now what? How do we analyse this? Some ultrasound machines come with their own analysis software that can be used to determine whether the blood flow is normal. Usually, interpreting ultrasound results for diagnostic or healthcare purposes requires training and experience, even with custom software. For research purposes, however, things are a bit different. The custom software may not always be what we are looking for, and diagnosis may not always be the end-game. In my research, I analyse beat-by-beat blood flow velocity in healthy participants, looking at how this changes over time. I have used custom MATLAB scripts for this, following quite simple steps:

  1. Export the data in .dat or .txt format. This will depend on the software, and I have found it easiest to work with machines that allow direct (during data collection) connection with a computer, feeding into Spike, LabChart or similar programs, as I know how to export data from such software without any fuss.
  2. Read the data file in MATLAB (or similar), then follow these processing steps:
    1. Visualise the data. I always look at my data before I start working on it. This helps in figuring out the quality, and in making sure I’ve measured what I think I have measured.
    2. Smooth the data. No data is perfect, and TCD data can be quite noisy for a number of reasons. While smoothing helps get rid of some of these issues, I prefer to smooth as little as possible.
    3. Remove any gaps. There may be gaps in the data file where the probe had to be shifted for gel application or similar. These can be removed manually, or you can use thresholds (a lower and an upper bound, outside of the physiological range). I prefer to automate as much as possible, both because I don’t want the hassle and because I want to avoid the potential bias that comes with manual changes. I tend to visualise the data file whenever using thresholds, to make sure the data looks fine, and I use the same thresholds for all subjects.
    4. Remove extra noise? If the data is particularly noisy, a second threshold excluding values that are greater/smaller than e.g. 1 standard deviation from the rest of the window should be helpful in removing noise. This should get rid of peaks arising from shifting the probe or applying more gel.
    5. Find the min and max values. I typically run a for loop for this, tagging all the min and max values (creating vectors for tags, values and time). While it may not be the most elegant method, I prefer to find the values by creating a window moving through the datafile, using an if statement to tag only values that are higher (max) or lower (min) than the other values in the window. I tend to use +/- 40 data points if the acquisition rate is about 50 Hz. I visualise the min/max values superimposed on the raw data to make sure I have correctly identified the values. This is where it is important not to have smoothed too much, because that flattens the wave and you might end up with several similar values and shifted/non-existing peaks/troughs.
    6. Identify the waveforms. I run a for loop to identify full waveforms, using the tags created in the step above. If the min values are tagged ‘0’ and the max values ‘1’, then a waveform would be identified by a 0-1-0 sequence (with an appropriate if statement excluding datapoints that are too far apart in terms of time to avoid any misclassifications across several waveforms).
    7. Calculate what you are looking for. With these vectors, means can be calculated (or systolic values, diastolic values, R-R intervals and so on). Means can be calculated as follows: Mean CBFV = (PSV + [EDV × 2])/3, where CBFV is cerebral blood flow velocity, PSV is peak systolic velocity (the max value of the waveform), and EDV is end-diastolic velocity (the second min value in the waveform). These are highlighted in the figure above. Just for reference, average adult mean flow velocity is 46-86 cm/sec.
  3. Rinse and repeat for all participants
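The steps above can be sketched in code. Below is a minimal Python stand-in for the MATLAB scripts described (the thresholds, window sizes and function names are illustrative, not the exact values from my analysis):

```python
def smooth(signal, width=3):
    """Step 2: light smoothing with a small centred moving average."""
    half = width // 2
    out = []
    for i in range(len(signal)):
        window = signal[max(0, i - half):i + half + 1]
        out.append(sum(window) / len(window))
    return out

def remove_gaps(signal, low=10.0, high=150.0):
    """Step 3: drop samples outside a plausible physiological range (cm/s)."""
    return [v for v in signal if low <= v <= high]

def tag_extrema(signal, half_window=40):
    """Step 5: tag local maxima (1) and minima (0) within a +/- half_window span.

    Returns a list of (index, tag, value) tuples."""
    tags = []
    for i, v in enumerate(signal):
        window = signal[max(0, i - half_window):i + half_window + 1]
        if v == max(window):
            tags.append((i, 1, v))
        elif v == min(window):
            tags.append((i, 0, v))
    return tags

def find_waveforms(tags):
    """Step 6: a full waveform is a min-max-min (0-1-0) sequence of tags."""
    return [(a, b, c) for a, b, c in zip(tags, tags[1:], tags[2:])
            if (a[1], b[1], c[1]) == (0, 1, 0)]

def mean_cbfv(psv, edv):
    """Step 7: Mean CBFV = (PSV + 2 * EDV) / 3."""
    return (psv + 2 * edv) / 3
```

A real script would also check that the tagged points are close enough in time (to avoid misclassification across waveforms) and would plot the tags on top of the raw trace as a sanity check.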


Further reading:

Kirsch, J.D. et al. (2013) Advances in Transcranial Doppler US: Imaging Ahead. RadioGraphics 33: E1–E14.

Purkayastha, S. & Sorond, F. (2012) Transcranial Doppler Ultrasound: Technique and Application. Semin Neurol. 32(4): 411–420.


Measuring brain blood flow

Brain blood flow can be measured in many ways, but in this post I will focus on transcranial Doppler ultrasound (TCD). This method is used to measure the speed and direction of blood flow through a single blood vessel in your head.

TCD is a great measure of brain blood flow: the method can measure blood flow beat-by-beat (we tend to say that it has excellent temporal resolution – it collects a lot of data points per unit time). It is also non-invasive, meaning it causes no harm to the body (although a lower power should be used for some probe placements). It is very good for measuring changes in blood flow, for example in response to certain stimuli, and can be used to measure how well blood flow responds to such stimuli.

So what is this signal? When we use ultrasound, an ultrasonic beam is sent from the probe into the blood vessel, where it is reflected back from the moving red blood cells. The returned signal is slightly different from the transmitted one. If the blood cells are moving towards the probe, the returned signal has a higher frequency; if they are moving away from the probe, it has a lower frequency. The size of the shift depends on how fast the blood cells move. We call this change in frequency the Doppler shift (the Doppler effect), often denoted Δf, and it is this Doppler shift that is used to calculate blood flow velocity.
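As a sketch of the underlying maths (the numbers here are illustrative, not from a specific machine): the standard Doppler equation relates the measured frequency shift to velocity via the speed of sound in soft tissue (about 1540 m/s), the transmitted frequency (TCD probes typically operate around 2 MHz), and the angle between the beam and the flow.

```python
import math

def velocity_from_doppler(f_shift_hz, f0_hz=2e6, angle_deg=0.0, c_tissue=1540.0):
    """v = (c * delta_f) / (2 * f0 * cos(theta))

    f_shift_hz: measured Doppler shift (Hz)
    f0_hz:      transmitted frequency (Hz); ~2 MHz is typical for TCD
    angle_deg:  insonation angle; assumed close to 0 for TCD
    c_tissue:   speed of sound in soft tissue (~1540 m/s)
    """
    return (c_tissue * f_shift_hz) / (2 * f0_hz * math.cos(math.radians(angle_deg)))
```

For example, a shift of 1.3 kHz at 2 MHz and a 0° angle works out to about 0.5 m/s (50 cm/s), a plausible velocity for the middle cerebral artery.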

Most of us have seen the Doppler shift in action. A common example is if an ambulance or fire engine passes us in the street with its sirens on. When it comes towards us, the source of the sound (the vehicle) is moving in the same direction as the sound of the siren, and the sound waves are therefore compacted (have a shorter wavelength), resulting in a higher frequency sound (a higher pitch). When it has passed us and moves away, the vehicle moves in the opposite direction to the sound of the siren (relative to us), so the sound waves are stretched (have a longer wavelength) and we get a lower frequency sound (a lower pitch).

(Not clear? Here is a Vimeo video illustrating the Doppler effect and the example above)

Ok, which blood vessel is best? There are plenty of blood vessels in the brain, but the middle cerebral artery (MCA) is great as it is a major artery and easy to locate. The MCA emerges from the carotid artery, the main artery going to the head. We have one carotid artery on each side of the neck (which you can feel when you take your pulse). At the top of the neck, the carotid artery divides into an internal and an external branch. The internal branch then divides into the MCA and the anterior cerebral artery (ACA). The MCA receives around 70% of the internal carotid artery blood flow, and we therefore often assume that blood flow through the MCA is representative of the total blood flow to one half of the brain.

How to measure the flow. When we image blood flow through this vessel using TCD, we put the ultrasound probe on a person’s temple, and choose an ultrasound beam depth of around 45-60 mm. The best depth depends on the anatomy of the person – there is a bit of variation in the population, so you may have to try different depths to get a good signal. The temple is a good place to measure blood flow, as it is where the internal carotid artery splits into the ACA (blood flowing away from the probe) and the MCA (blood flowing towards the probe). This gives a very distinct waveform (see below) and means we can be reasonably sure we have placed the probe and the ultrasound beam correctly.

Typical TCD trace from the middle cerebral artery. 

It should be mentioned that when measuring blood flow through other vessels, we may choose a different placement of the probe. This can include putting the probe over the (closed) eye (transorbital), at the base of the neck (suboccipital) or on the neck below the ear (submandibular). We call these placements ‘windows’ (i.e. the transtemporal window is the placement of the probe on the temple). From the transtemporal window, we can see flow through the MCA, ACA, the terminal internal carotid artery (ICA) and the posterior cerebral artery (PCA).

What is the physiology behind the signal? The waveform shows changes in blood flow due to systolic and diastolic phases of the heart. Referring to the image above, each one of the ‘waves’ is a heartbeat. The start of the systole, peak systole, dicrotic notch and the end of the diastole are marked on the figure. Systole is when the heart contracts and pumps the blood out, and diastole is when the heart relaxes and refills with blood. The dicrotic notch is a short-term change in aortic pressure that stems from the closure of the aortic valve. This valve is between the left ventricle of the heart and the aorta, and blood passes this valve as it enters the body circulation. 

So it starts with the beginning of the systole (when the heart starts to contract). The increased pressure in the contracting heart pushes blood out of the left ventricle and into the circulation through the aorta. Some of this blood goes to the brain and is what we measure with TCD. Blood flow increases through the blood vessels as it is being pumped out of the heart. At peak systole, the blood flow for that particular heartbeat is at its maximum. Then, as the heart begins to relax, the pressure in the ventricle drops and it begins to refill with blood (coming from the lungs). The aortic valve closes and stops blood from flowing the wrong way back into the heart. This closure of the valve can be seen as a small change in blood flow (the dicrotic notch). We reach the lowest flow just as the heart is filled up and ready to contract again. TCD gives us a measure of the blood flow at all these stages of the cardiac cycle.


Ultrasound of the heart. All four heart chambers are visible, as are the valves (flapping open and closed) between the atrial and ventricular chambers.

The temporal resolution of TCD is excellent. As can be seen in the gif above, ultrasound lets us see things in more or less real time (in this case a beating heart), and this makes it a useful technique for measuring rapid changes in brain blood flow.

I will be posting a short tutorial on how to analyse TCD data soon, and if anyone has any questions on the method before then, I would be happy to address them in the comments.

The physiology of the bends

Gases and physiology go hand in hand. We breathe oxygen and exhale carbon dioxide. Several molecules that we usually come across in gas form act as signal molecules, for example carbon monoxide and nitric oxide. But sometimes, things go wrong when gases meet physiology. In this blog post, I will talk about one such situation: decompression sickness, also called the bends.


What, exactly, is the bends? The bends happens when a person has been under pressure and that pressure is then removed quickly, causing the gases that are normally dissolved in the body fluids to form bubbles. A typical situation is when someone has been diving deep underwater and then resurfaces too quickly*.

What causes it? The first thing we need to know is which gases are problematic, namely inert gases. There are inert gases in the body. An inert gas is a gas that, under the given conditions, does nothing – i.e. it doesn’t undergo chemical reactions. The inert gases in the body mostly sit there and do not take part in metabolism. Oxygen is not an inert gas in the body – it contributes to several physiological processes including respiration. Nitrogen, however, is an inert gas in the body – it does not contribute. It is the main inert gas in the air, and the most common culprit of the bends.

Nitrogen and other inert gases are typically dissolved in the bodily fluids. This process depends on pressure. Henry’s Law describes the relationship between pressure and a gas dissolved in liquid: the amount of gas dissolved in a liquid is proportional to the partial pressure of that gas, so when the pressure is decreased, the amount of gas dissolved decreases to the same extent, and when the pressure is increased, the amount dissolved increases. This means that if the human body is put under high pressure (e.g. when diving), more of the inert gases will be dissolved in the bodily fluids. This is not a problem in itself. The gases are inert, doing nothing, so we can usually handle the extra amount. The problem occurs when the pressure is reduced, and it is reduced fast. If a diver moves slowly from the high-pressure underwater environment to the surface, the gases being released can escape bit by bit as the person breathes out, which is fine. But if it happens fast, the lungs may not be able to keep up, and that causes trouble. The inert gases come out of solution in places where they really should not be: we get bubbles trapped in the body.

Bubbles are bad news. They disrupt blood flow and can cause damage. Bubbles can form in (or be transported to) any part of the body (although some places such as large joints are more at risk). This means that we can get a great range of symptoms, depending on where the bubbles form. The worst outcome is death, which can happen if the bubbles for example interrupt spinal cord function. Less severe outcomes, although still severely unpleasant, are seizures, paralysis, dizziness, pain, visual disturbances, breathlessness, nausea, incontinence and more. In short, we do not want bubbles of inert gas (usually nitrogen) interfering with our organ function.

The deeper and longer the dive, the more dangerous it can be. We know that pressure increases as you descend. At 10 meters, the pressure has increased from 1 atm (sea level) to 2 atm, and the same level of pressure will therefore be needed for the diving suit to remain inflated. For reference, one atmosphere is the pressure that the air exerts at sea level, and the pressure increases by 1 atm for each 10 m increase in depth. In other words, each 10 m below the surface adds to the body of water pressing down on the diver. The further down, the more weight, and this weight is in addition to the weight of the air (1 atm at sea level).

At 1 atm (sea level), the oxygen content of air is approximately 21%, and the nitrogen content is approximately 78%. The oxygen pressure is therefore 0.21 × 1 atm = 0.21 atm, and the nitrogen pressure is 0.78 × 1 atm = 0.78 atm. There are other units that pressure can be measured in, such as mmHg or Torr (1 atm = 760 mmHg = 760 Torr) and kPa (1 atm = 101.325 kPa), but we will stick to atm for now.

If a diver is given normal air (with the percentage composition described above), at 10 meters the oxygen pressure will be 0.21 × 2 atm = 0.42 atm, and the nitrogen pressure will be 0.78 × 2 atm = 1.56 atm. At this pressure, the blood will dissolve more nitrogen. This means that the fluids in the body will contain increasing amounts of nitrogen until an equilibrium is reached. How much nitrogen ends up where in the body depends on the tissue composition. Fat, for example, can dissolve a lot of nitrogen, and will take a bulk share.
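These numbers follow from Dalton’s law (partial pressure = gas fraction × total pressure) combined with the 1-atm-per-10-m rule. A small sketch (the function names are my own, for illustration):

```python
def total_pressure_atm(depth_m):
    """Ambient pressure: 1 atm of air at the surface plus ~1 atm per 10 m of water."""
    return 1.0 + depth_m / 10.0

def partial_pressure_atm(gas_fraction, depth_m):
    """Dalton's law: partial pressure = gas fraction * total pressure."""
    return gas_fraction * total_pressure_atm(depth_m)

# At 10 m: oxygen 0.21 * 2 atm = 0.42 atm, nitrogen 0.78 * 2 atm = 1.56 atm
```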

Dissolving nitrogen is a slow process, as is releasing dissolved nitrogen. The longer the dive, the more nitrogen is dissolved, and the longer it takes to release it afterwards.

So why nitrogen? Why not oxygen, or carbon dioxide? As a rule of thumb, for the bends to happen, the diver has to experience the gas in question at a pressure of at least 2 atm. This excludes both carbon dioxide and oxygen. Even at depth, carbon dioxide pressure simply does not increase much above its normal level (about 40 mmHg, or around 0.05 atm), and oxygen is used up by the tissues so it does not rise as much either*. Unless the diver uses a gas mixture with a different inert gas (e.g. helium), nitrogen will be the villain of the piece.

There are advantages to using other gases. Helium, for example, is less soluble than nitrogen, so less of it dissolves into the body fluids, and it also diffuses in and out faster, reducing the risk of developing the bends. For long dives, this is good, but for short ones (where nitrogen simply would not have time to dissolve much), it’s not so good. One may ask why use an inert gas at all, if they are so problematic, and the simple answer is that pure oxygen under pressure is toxic**.

What about diving animals? Plenty of animals have an aquatic mode of life, and the list includes a variety of species from the odd rodent, ungulate and marsupial to penguins, seals and whales. Many of these dive to incredible depths, and they ascend and descend quickly.

Diving depths of marine mammals, not including the deepest diving whale (Cuvier’s Beaked Whale, 2992 m) or birds (deepest diving is Emperor Penguin at 565 m, and deepest diving flying bird is Brünnich’s guillemot at 210 m). The personal record of this particular mammal is 3 m, which was a proud, yet unpleasant, moment. *Record by Soviet submarine K-278 in 1984, data for current submarines classified. Image not to scale: the submarine was 117 meters long, making it around 4 times the size of a blue whale.

At depth, the lungs of diving animals are almost entirely collapsed and blood flow to the lungs is limited. Some diving animals even have collapsible rib cages that force the air out when they dive. This means that no nitrogen can enter the blood during the dive. Many diving animals have quite small lungs, possibly because of this (and because large lungs work as airbags, making diving difficult). Diving animals do not store oxygen in their lungs for their dives, but rather in their blood, and they are very particular in how they use what little oxygen they have.

Oxygen storage. Diving animals usually have much more blood relative to their body size than non-diving animals, and the blood itself has a greater oxygen capacity (almost 40 ml oxygen per 100 ml blood, twice that of humans). The increased oxygen-carrying capacity is due to a greater number of red blood cells (which contain haemoglobin, the molecule that binds oxygen), and a much higher level of a related oxygen-binding molecule, myoglobin, in the muscles. The sperm whale, for example, has a myoglobin content almost 10 times that of humans, allowing it to store a vast amount of oxygen. Theoretically, such high levels should cause the myoglobin to clump together, causing disease, but diving animals’ myoglobin is positively charged and stays nicely separated even when the muscles are packed with it.

Oxygen use. This impressive store of oxygen, however, is not enough. In addition, most diving animals have a lowered heart rate during dives, and blood flow to most of their internal organs is lowered (the exception being the brain). Even the heart gets less blood (but then again, the heart rate is lowered, so less blood is needed). Muscles also get reduced blood flow despite being needed to swim – instead, they rely on anaerobic (non-oxygen) metabolism after the myoglobin oxygen store is depleted. Many diving animals also reduce muscle use. For example, Weddell and elephant seals, bottlenose dolphins and blue whales have all been shown not to use their muscles when descending: sinking rather than swimming. Diving animals also reduce their metabolic rate, meaning that they use less energy. This is made easier by many diving animals being large, because oxygen consumption relative to body size decreases as body size increases. In other words: a mouse uses much more oxygen per kilo of body mass (1.65 litres of oxygen per kilo per hour) than does an elephant (0.07 litres of oxygen per kilo per hour).
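The mouse/elephant comparison is easy to turn into whole-body numbers. Using illustrative body masses (a 20 g mouse and a 4000 kg elephant – round figures of my own, not from a specific study) together with the mass-specific rates above:

```python
def oxygen_use_l_per_h(mass_kg, rate_l_per_kg_per_h):
    """Whole-body oxygen consumption from a mass-specific metabolic rate."""
    return mass_kg * rate_l_per_kg_per_h

mouse = oxygen_use_l_per_h(0.02, 1.65)     # ~0.033 litres per hour in total
elephant = oxygen_use_l_per_h(4000, 0.07)  # ~280 litres per hour in total
per_kg_ratio = 1.65 / 0.07                 # the mouse uses ~24x more oxygen per kilo
```

So the elephant uses vastly more oxygen in total, but far less per kilo of body mass, which is what matters for how long a fixed oxygen store can last.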

In short: diving animals avoid the bends through minimizing the amount of nitrogen entering their blood when they dive***. To stay submerged, they are very good at storing oxygen in their blood (through higher blood volume and greater amounts of red blood cells) and muscles (through greater myoglobin levels), and use less oxygen (through shutting off blood flow to non-important organ systems, reducing their heart rate and metabolic rate, letting their muscles work anaerobically or not work at all).

*Interesting fact: The bends can also happen when ascending rapidly in for example an aircraft, with the danger arising as the pressure is quickly reduced to 0.5 atm. The underlying principles remain the same. A pressurised cabin will prevent this entirely (as with deep-diving submarines).

**Oxygen at pressures above 1 atm is toxic. When diving deeper than 60 meters, the pressure is so high that the gas mix needs to contain less oxygen than at sea level (a hypoxic mixture) to avoid oxygen poisoning. There are multiple studies on oxygen toxicity, but the Wikipedia article on the topic is quite extensive and accessible.

***This does not mean that they cannot get the bends. For example, stranded whales have shown signs of gas-bubble tissue damage (Gas-bubble lesions in stranded cetaceans. Nature 425, 575–576 (2003)). It is possible that this has to do with the animals being stressed (and therefore either diving for too long or ascending or descending too quickly), or having too high blood flow (again due to stress, or possibly also cold). While we do not know the exact reason why and how this happens, we do know that the use of sonar has been linked to whales getting the bends.

Magnet mistakes

This is just a short post on the many ways in which films and telly often get MRI wrong, and one thing that they tend to get right. Also, it is a good excuse to post a few interesting MRI videos.

1. The magnet is ALWAYS on. You don’t turn on an MRI. Nor do you turn it off. The machine uses a magnetic field which is always on as long as the machine is operational (whether scans are being taken or not). This field can be pretty strong, and will trap ferrous metal objects in the bore, even if there is no scan running. Actually turning off the magnetic field (quenching it) is only done if the scanner is being decommissioned or in life-threatening situations, as it puts the scanner out of action for at least a week and costs a lot to restart (>£20,000), even if it has not been damaged by the quench.

2. Pressing the red button is usually bad news. There are two types of big red button. One is an emergency stop which does not turn off the magnet per se, but turns off power to consoles, lights (not emergency lights) and so on. The other quenches the magnet (rarely done, see the above point) and it looks like this:

The video above is a magnet being quenched at 1% helium capacity, which is to say that it is not nearly as big an event as it could be.

3. The magnet is as strong as it is. Variable field strength is not a thing. You cannot turn up the field, and you cannot turn it down.

4. Scans typically take time to acquire and interpret. If it were possible to put a person in and read out the data within seconds, that would be great. However, a good structural scan takes minutes, and a functional scan often even longer, and it requires additional processing steps that can take hours or days. There are also setup scans typically run before the main scan, and these take time too. It is unfortunately not plug and play. Caveat: real-time MRI is a thing, but it is mostly used for cardiac imaging and rare cases of functional MRI neurofeedback sequences. Typically, these are not the ones portrayed in the offending films.

5. Colours? Scans usually don’t come automatically in pretty colours. Structural scans are in more-or-less grainy black and white, and while functional scans can be presented in colour, this requires a lot of processing after the scan has been completed (see above). And what you get out is typically a statistical map of the signal, not the actual measurements themselves. In short: colours usually means lots of stats, stats usually means lots of time.

6. It’s noisy! And not simply high-tech whirr either: it can sound like a construction site in there.

7. There’s often a coil. At least with neuroimaging, where the coil is a cage-like structure placed around the head. This mistake can be forgiven if the scan in question would use a body coil, which can be pretty invisible and look like a part of the table.

There are plenty of films getting MRI wrong, for example Die Another Day (although the MRI bloopers are arguably not the biggest problem with that film) and Terminator Genisys, which manages to get not only the turning-it-on-and-off wrong, but also the variable field strength, plus it introduces a conveniently appalling lack of shielding, meaning that the fringe field (the magnetic field that surrounds the magnet) is so large it reaches the control room. Go watch the Terminator clip over on youtube (from 1 min) to see for yourself. I feel for that poor MRI scanner.

+1. It’s strong. This is the one most get right. The magnetic field is strong – it will pull ferrous items into the bore of the magnet, wreck the item (and sometimes itself), and you’re probably not strong enough to stop it.

That being said, there are plenty of films where metal props are far too close to the magnet to be believable, and the ‘patients’ are allowed to keep on items of clothing such as underwired bras and watches, and even bring handbags or other personal items into the scan room. Even if such items are not ferrous, they can still cause image artefacts, and are typically removed. I’ve been told there is a Grey’s Anatomy episode where an MRI was requested for a patient with a fork stuck in the neck – the less said about that the better.

Reblog: undergraduate co-authors

There is a short blog post over at Dynamic Ecology on undergraduate co-authorship. Fortunately, there seems to be agreement that undergraduates can be authors, and that authorship should be determined based on contribution rather than rank (which, if reversed, would be a rather, well, rank system). What I find the most interesting, however, is the comments section, where strategies for fostering authorship opportunities for undergraduates are discussed. How do we aid the process and make sure we take steps to include rather than exclude? It’s a good (and short) read:

Our next question in our ask us anything series comes from Liz: For undergraduate researchers, what is enough of a contribution to merit co-authorship versus acknowledgements? – via Dynamic Ecology

Great science writing

Great science writing is hard to find. I have assembled a list of some of the best popular science books I’ve come across in the past few years. Some are old, some new-ish, but all of them were good. It is a mixed bunch with respect to topic, but I am a great believer in reading outside of your field. In no particular order:

Spillover: Animal Infections and the Next Human Pandemic (David Quammen)
This is an excellent read. Each chapter is dedicated to a zoonotic disease, from AIDS to Nipah, with titles such as Thirteen Gorillas and Dinner at the Rat Farm. The book was released before the 2013-2016 Ebola outbreak, so it may also be worth picking up Quammen’s Ebola: The Natural and Human History of a Deadly Virus which supplements his chapter on Ebola.

The Emperor of all Maladies: A Biography of Cancer (Siddhartha Mukherjee)
This is the story of cancer, from the first mentions of the disease (over 4000 years ago) to today. It covers the development of treatment options, the successes and the failures. It is grisly and bleak in (many) places, perhaps unsurprising given the subject, but highly acclaimed, beautifully written and very accessible. While the conclusion of the book unfortunately is not a promise of a cancer-free world, it nevertheless comes across as cautiously optimistic.

The Immortal Life of Henrietta Lacks (Rebecca Skloot)
Skloot’s book is about Henrietta Lacks, whose cancer gave rise to the HeLa cell line used widely in research today. It is as much about the mistreatment of Lacks and her family related to the use of the cell line as it is about the scientific breakthroughs (and problems) that the cell line made possible. An important reminder that ethics in medical research is obligatory and necessary.

Why we sleep (Matthew Walker)
“Scientists have discovered a revolutionary new treatment that makes you live longer. It enhances your memory, makes you more attractive. It keeps you slim and lowers food cravings. It protects you from cancer and dementia. It wards off colds and flu. It lowers your risk of heart attacks and stroke, not to mention diabetes. You’ll even feel happier, less depressed, and less anxious. Are you interested?” Neuroscientist Matthew Walker’s book on sleep is an excellent, compelling read on an important topic.

Six degrees (Mark Lynas)
This book on global warming was released in 2008, and is bound to be a bit dated. It is a great read nevertheless, and summarises the danger that increasing temperatures pose. Each chapter sets out to explain the consequences of temperature increase, degree by degree (One degree, Two degrees, Three degrees and so on) up to six degrees. As expected, the effects get increasingly bad as you read on, but the book is more a call to action than to apathy. It is a great, if distressing, read.

The Drugs Don’t Work: A Global Threat (Sally Davies, Jonathan Grant, Mike Catchpole)
Not to be confused with the Verve song, this book is about antibiotic resistance and the thin pipeline for antibacterials. It is a very short book, written by experts in the field including the Chief Medical Officer for England (Davies), and its message is that the loss of antibiotics means the end to modern medicine as we know it. Published in 2013, the book unfortunately remains relevant and serious today.

The World Without Us (Alan Weisman)
What would happen if humans suddenly disappeared? This is a thought experiment, speculating on the outcome of the sudden (and impossible) disappearance of all humans on earth. When would our buildings crumble? How would our pets fare? When would all traces of us be lost? It’s an interesting idea, well written and discussed, and oddly reassuring in its description of a world that continues to grow and thrive without humans.

The Oxford Book of Modern Science Writing (Richard Dawkins)
This is a collection of science writing, penned by scientists and reaching as far back as the early 1900s. It is a varied collection, both in terms of topic and style, but it is a great read for anyone wanting to work on their science communication, or just wanting to read excellent prose on random scientific topics.

Stiff: The Curious Lives of Human Cadavers (Mary Roach)
I admit to buying this book out of morbid fascination with the topic. Still, it is an interesting read. The book elegantly covers all aspects of human cadavers: their historical uses, their decomposition (or lack thereof), moral issues, how we dispose of them and how they can help us even in the modern age. It’s a great read if you are not squeamish.

Rabid: A Cultural History of the World’s Most Diabolical Virus (Bill Wasik, Monica Murphy)
This is a summary of rabies, what we know of it, its history and cultural significance. It outlines the early misconceptions, the discovery of the virus, and the attempts at treatment in easy-to-understand language. It also highlights the impact rabies has had on society, including folklore. A fascinating read.

A Short History of Nearly Everything (Bill Bryson)
I do not think any list of popular science books would be complete without this gem. It is a book which covers (nearly) everything in surprisingly easy-to-follow language. Think Cosmos: A Personal Journey or Cosmos: A Spacetime Odyssey in book format, with more detail. As overviews go, it is in my view quite unbeatable.

Gut: The Inside Story of Our Body’s Most Underrated Organ (Giulia Enders)

And finally, a recently-released book that I have not yet read (but want to): Inferior: How Science Got Women Wrong – and the New Research That’s Rewriting The Story by Angela Saini, reviewed very favourably here.


Following my list of resources for fresh neuroscientists, I figured I’d share something for those interested in exploring neuroscience but not quite ready to pick up a textbook, namely Neurocomic.

Neurocomic is a Wellcome Trust-supported project that aims to explain neuroscience ideas to a lay audience using comics. The brainchild of neuroscientists Hana Ros (UCL) and Matteo Farinella (who is also the artist), the story follows a man who is trapped inside a brain and journeys to escape. The quest takes him through neuron forests, distinct brain regions and visual metaphors of common concepts in neuroscience and psychology, and he encounters various beasts and scientists along the way. Judging by its reviews, most people find it accessible and accurate, if a bit short (and short on women). Personally, I am particularly enthused by the medium.

The process and ideas behind the project are explained in the video below, and might be of interest to potential readers and to researchers considering tools for effective science communication.

Using illustrations to communicate science (or any information, really) can be powerful. I tend to draw quite a lot in my work, and find that visualising problems helps me work through them faster and see connections that might not be immediately apparent. It works great as a study method too (see this paper for example), helping students remember content better. As for communicating information, it is superb. I have a list of favourite science comics that have introduced me to new concepts more than once (I am particularly looking at you, xkcd, and your lovely comic explanation wiki). While three-panel strips rarely provide a full picture of a scientific concept, they can certainly offer a brief and memorable introduction. Similarly, neurocomic’s 150 pages are not enough to encompass the entire history and science of brain research, but it is a good place to start to get a flavour for some of the ideas in the field.

Neurocomic is on twitter @neurocomic

Resources for new neuroscientists

Analysis can be tricky at times, especially in neuroscience. Our business is one of maths and stats, and it can all be a bit complex, especially for those new to the field. Below is a short list of resources available to neuroscientists that may be useful when trying to make sense of the data:

1. The Q&A forum. There is a new resource in town for those new to neuroscience and stuck on a technical or analysis problem: Neurostars. It is a discussion forum where you can post your questions and (hopefully) get answers from the community. It has been open since December 2016, and from what I can tell, most questions posed on the forum have received at least one answer. It is not limited to any particular software library, and it has a search function that can help you figure out whether anyone has asked your question before. Whilst I have not used it myself (yet), it seems like a good resource.

2. The forum for (mostly) FSL. There is also FSL’s JISCmail site, which works on the same principle as Neurostars. As the name suggests, this is a discussion forum for FMRIB’s Software Library (FSL), but the topics span everything from the highly technical to basic model design, which means it could be useful for those favouring other software. As an FSL user, I’ve found this site immensely helpful whenever I’ve been stuck, and I can vouch for the quality of the answers on the forum. Records go back to 2001, making this a huge resource where you are almost guaranteed to find the answer you need. However, make sure that the Q&A is not too old, as analysis tools do evolve.

3. The introduction to analysis. Finally, if you don’t really have a specific question, but rather would like an introduction to MRI analysis, I can thoroughly recommend Jeanette Mumford’s youtube channel, mumfordbrainstats. It’s clear and understandable, and it will make sure you understand what goes on under the bonnet of your analysis. She also created fmripower, a nifty power calculator for fMRI, which can give you a better indicator of power for your next experiment than “other people have used X subjects, so we’re going to go with that”. While it can only calculate power for a limited number of statistical tests, it is still a very useful starting point.

I’m willing to bet that at least one of the above will be able to help should you find that your analysis suddenly doesn’t work the way you expected it to.


How small is too small? MRI of tiny structures

MRI is great for imaging tissues and organs, as it does not involve any invasive procedures (such as drugs, radiation or even needles/scalpels). It allows us to quickly and safely get a good idea of what goes on under the skin. However, just as with a photo, it can become pixelated and useless, especially if you are looking at small structures. Imagine taking a holiday snapshot of a distant landmark, say, the Eiffel tower, but when you zoom in, it becomes hard to see what is tower and what is sky. Up close, the image is hard to interpret, and determining the exact size of the structure in the image may become impossible. In a pixelated image of the Eiffel tower, one pixel may contain both tower and sky, and you can’t tell where the exact line between the two goes. This is also the case with MRI.


The MRI image is divided into voxels (same as pixels, just three-dimensional – think of it as a set of tiny cubes making up the image). The quality of your image depends on the number of voxels and the signal from these. Too few voxels, not enough information. It would be like having just a few pixels covering the spire of the Eiffel tower – it won’t look good. One voxel covering a big chunk of the brain of a human participant won’t be very useful as there simply is not enough resolution to determine what’s what.
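The voxel arithmetic behind this is simple enough to sketch. The numbers below (a 200 mm field of view, chosen purely for illustration) are assumptions, not settings from any particular scanner:

```python
# Rough voxel arithmetic: how many voxels does it take to cover a
# field of view at a given voxel size? Illustrative numbers only.

def matrix_size(fov_mm: float, voxel_mm: float) -> int:
    """Number of voxels needed to span a field of view in one dimension."""
    return round(fov_mm / voxel_mm)

# A ~200 mm field of view (roughly human-head sized) at 1 mm voxels:
print(matrix_size(200, 1))     # 200 voxels per dimension
# The same field of view at 0.06 mm voxels needs vastly more voxels:
print(matrix_size(200, 0.06))  # 3333 voxels per dimension
```

Shrinking the voxels does not change the anatomy being covered, so finer resolution always means many more voxels, each of which has to yield enough signal to be useful.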

You can also have enough voxels but too little signal. Too little signal means not enough information. It would be like taking the photo without any light. In photography, light creates the image. If you have plenty of light, you can typically get more detail from each part of your image, and so your resolution can be better. Not enough signal means you need to keep your shutter open for longer, to let more light in. You can do that, but you probably need a tripod, and any movement (birds, people, wind, clouds, a bus rumbling past) will affect your picture and make it blurry. Same with MRI. You can increase signal by increasing scan time, but that’s not always possible and means you have to keep people still and in the scanner for longer.


Typically, the stronger your field strength, the more signal (light) you can get from your voxels. More signal means you can reduce voxel size as you will be getting more detail from each part of your scan. This typically results in better resolution. Or, you can get the same resolution, just faster (i.e. keep the shutter open for a shorter period). This can be useful if what you want to scan moves. For some tissues, it is invaluable to be able to get quick images. Imagine getting an image of a beating heart, for example. You need a quick scan sequence, and you typically need to do it several times over to get a nice, detailed image. That is the equivalent of taking lots of quick shots of the Eiffel tower (swaying in the wind, perhaps), and piecing them together to get all the fine details. This process of getting many snapshots of the same thing and taking the average to get a good image is very common in MRI, and it helps with a third issue: noise.

Everything you see in an MRI image is either signal or noise. We call the relationship between the two the signal-to-noise ratio (SNR), and we want it to be as high as possible (more signal, less noise). Noise in MRI images is usually caused by particles with an electrical charge moving around slightly in the human body, or by electrical resistance in the MRI machine itself. Together, these cause variations in signal intensity. It is much the same in our photo of the Eiffel tower: small visual distortions give it a grainy quality.
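To make the SNR idea concrete, here is a toy calculation. SNR is often estimated as the mean intensity in a tissue region divided by the standard deviation of a background (air-only) region; the intensity values below are simulated, purely for illustration:

```python
import random
import statistics

# Toy SNR estimate: mean of a "tissue" region divided by the standard
# deviation of a noise-only "background" region. Simulated values only.

random.seed(0)
tissue = [100 + random.gauss(0, 5) for _ in range(1000)]  # signal region
background = [random.gauss(0, 5) for _ in range(1000)]    # noise-only region

snr = statistics.mean(tissue) / statistics.pstdev(background)
print(round(snr, 1))  # roughly 20: signal ~100, noise sd ~5
```

Halve the signal or double the noise and the ratio drops accordingly, which is exactly the trade-off the voxel-size discussion below is about.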


When we have smaller voxels, we typically get less signal and more noise per voxel (a low SNR). The way around it is to increase the number of averages that we run – i.e. take more snapshots, get a better image. Unfortunately, this takes time. Usually, then, we end up with a compromise between how good a resolution we want and how long we want the scanning to take, and this is greatly dependent on the kind of signal we can get.
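The averaging trick can be simulated in a few lines: noise shrinks roughly as 1/√N over N averages, so SNR grows as √N. The signal and noise values below are made up, and no real scanner behaves quite this cleanly:

```python
import random
import statistics

# Averaging repeated noisy acquisitions: the noise standard deviation
# falls roughly as 1/sqrt(N), so SNR grows as sqrt(N). Simulated only.

random.seed(1)
TRUE_SIGNAL = 100.0
NOISE_SD = 10.0

def acquire():
    """One noisy 'measurement' of the true signal."""
    return TRUE_SIGNAL + random.gauss(0, NOISE_SD)

def averaged_scan(n):
    """Average n repeated acquisitions, as MRI does with multiple averages."""
    return sum(acquire() for _ in range(n)) / n

singles = [averaged_scan(1) for _ in range(2000)]
sixteens = [averaged_scan(16) for _ in range(2000)]

# Noise with 16 averages should be about 4x smaller (sqrt(16) = 4):
ratio = statistics.pstdev(singles) / statistics.pstdev(sixteens)
print(round(ratio, 1))  # close to 4
```

The square-root scaling is the painful part: quadrupling the scan time only doubles the SNR, which is why averaging alone cannot rescue arbitrarily small voxels.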

The good news is that signal is, as mentioned earlier, largely dependent on the field strength of our magnet. And we have some strong magnets available to us. For a human scan at a field strength of 3 Tesla (T), a resolution of 1mm x 1mm is easily obtained in about 5 minutes. That is fine for a structural scan of, say, a human brain. But what if you want to scan something smaller? Humans move, no matter how hard they try not to, even if it is only to draw breath. Higher resolutions mean you’ll pick up on these small movements. A tiny motion can shift a small voxel completely out of place, while a bigger voxel would be less affected. In short, smaller makes things more difficult.

So, how small is too small?

For higher-field machines in humans, such as 7T, you can go small. 0.5mm x 0.5mm for 7T is easily doable, and some post mortem scans have gone down to 0.14mm x 0.14mm [1]. For our 9.4T scanner, we have scanned with resolutions of 0.03mm x 0.03mm (post mortem), and resolutions of 0.06mm x 0.06mm should be possible for scans that are not post mortem. That means that with 9.4T, we can in theory image structures as small as 0.25mm across with some clarity (remember, you need at least a few voxels across the thing you want to image to properly see its edges and detail). Our scanner is too small for a whole human, but 9.4T MRI machines for humans do exist. And an 11.75T magnet is underway (see the BBC news story here), which will be able to get resolutions down to 0.1mm (or possibly finer) in humans. So at present, anything less than 0.25mm is probably too small for MRI.
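The back-of-envelope arithmetic here is just voxel size times the number of voxels you want across the structure; the "about 4 voxels" rule of thumb below is an assumption for illustration, not a hard standard:

```python
# Back-of-envelope check of the "how small is too small" arithmetic.
# Assumption: you want at least ~4 voxels across a structure to see
# its edges and detail with some clarity.

def smallest_visible_mm(voxel_mm: float, voxels_across: int = 4) -> float:
    """Smallest structure size resolvable with some clarity, in mm."""
    return voxel_mm * voxels_across

# An in-vivo 9.4T resolution of ~0.06 mm gives roughly the 0.25 mm
# limit quoted in the text:
print(smallest_visible_mm(0.06))  # 0.24 mm, i.e. about 0.25 mm
# At a routine 1 mm resolution, structures below ~4 mm get murky:
print(smallest_visible_mm(1.0))
```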

Smaller than that, and we have to use other methods. Microbiologists and their microscopes are likely to laugh in the face of 0.25mm. Below is a picture with a 9.4T image and a histology image of testicular tissue [2], showing how much detail each technique affords. Histology is obviously better. Sadly, however, it still requires scalpels, and MRI does not.


[1] Stucht D, Danishad KA, Schulze P, Godenschweger F, Zaitsev M, Speck O. Highest Resolution In Vivo Human Brain MRI Using Prospective Motion Correction. PLoS ONE. 2015;10(7):e0133921.
[2] Herigstad M, Granados-Aparici S, Pacey A, Paley M, Reynolds S.

The wiring of DIN plugs

I get to do all sorts of practical stuff at work, some of which has nothing to do with physiology or imaging. One such thing is producing custom-made cables and plugs. Since I’ve had the pleasure of doing a fair few of these recently, I thought I’d put up a short how-to blog post on wiring up a DIN connector.

DIN stands for Deutsches Institut für Normung, which translates to the German Institute for Standardisation. A DIN connector is, in short, a standardised connector, and the common variants all come in a similar size. You may have seen them, as they are often used for analogue audio. The male DIN plug is typically 13.2 mm in diameter, and it often has a notch at the bottom to make sure the plug goes in the right way. Male plugs have a set of round pins, 1.45mm in diameter, that are equally spaced within the plug. The different types of DIN plugs have different numbers and configurations of pins. Below is an overview of some typical pin configurations.


There are, of course, variations on these themes, as well as specialised plugs with more than 10 pins. Pins on male connectors are numbered. The numbering* goes from right to left, viewed from the outside of the connector with the pins upward and facing the viewer. The female counterparts are the inverse of the male plugs, and their numbering runs from left to right. Usually, only corresponding male-female pairs work together, but you may be able to fit a 3-pin plug into a 5-pin 180-degree socket.

*EDIT: to clarify, the numbering of the pins is not sequential, as pointed out in the comments (thank you!). A 5-pin plug would be numbered (from right to left for the male, and from left to right for the female): 1–4–2–5–3.
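Since the numbering trips people up (myself included), it can help to write the ordering out explicitly. This little sketch just encodes the 1–4–2–5–3 sequence described above, for a 5-pin 180-degree connector:

```python
# Pin numbers of a 5-pin 180-degree DIN connector, read left to right
# when viewed from the outside. The male reads 1-4-2-5-3 right to left,
# so left to right it is the reverse; the female reads 1-4-2-5-3
# left to right, being the mirror image of the male.

MALE_LEFT_TO_RIGHT = [3, 5, 2, 4, 1]
FEMALE_LEFT_TO_RIGHT = [1, 4, 2, 5, 3]

# Sanity check: the female layout is the mirror of the male layout.
assert FEMALE_LEFT_TO_RIGHT == list(reversed(MALE_LEFT_TO_RIGHT))

print(MALE_LEFT_TO_RIGHT)
```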

So how to attach a DIN plug to a cable? 

  1. Take the cable and strip off a few centimetres of the outer plastic, then remove the padding. Strip the insulation (about 1 cm) off the individual internal cables (cores). Slide the DIN metal sheath over the wire.
  2. Make sure that each core fits neatly into the holes of the DIN plug. This might mean trimming some of the wires. Move the uninsulated wires (the ground) to one side.
  3. Solder the strands of each core together (tinning) and test that they still fit the holes in the plug.
  4. (Now comes the fiddly part.) Add solder (a small amount is best) to the DIN plug holes, heat with the soldering iron and push the tinned cores into the holes. Remove the soldering iron and let the solder solidify (a few seconds only). Test that the joints hold by pulling firmly on the plug and cores. Attach the metal clamp around the cores (see the second figure below).
  5. Wrap the ground wires around the base of the metal clamp. Make sure the ground does NOT touch the cores. Using a multimeter, test that there is no connection between the wires, or between the wires and ground. Slide the metal sheath over the structure and the plastic, and screw on the release catch where it overlays the hole in the metal clamp.
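The multimeter check in step 5 amounts to testing every pair of conductors for continuity. A trivial sketch (with hypothetical pin labels, for a 5-pin plug) enumerates the pairs, so you can tick them off as you go:

```python
from itertools import combinations

# Continuity checklist for step 5: every pair of conductors (cores plus
# ground) should show NO continuity with each other on the multimeter.
# Pin labels are hypothetical, for a 5-pin plug.

conductors = ["pin1", "pin2", "pin3", "pin4", "pin5", "ground"]
pairs = list(combinations(conductors, 2))

for a, b in pairs:
    print(f"check {a} <-> {b}: expect no continuity")
print(len(pairs))  # 15 pairs to check in total
```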