Analysing brain blood flow

In a previous post I described how to use ultrasound to measure brain blood flow. So, the data has been gathered. To recap, this is what it looks like:

[Figure: TCD blood flow velocity trace]

Now what? How do we analyse this? Some ultrasound machines have their own analysis software that can be used to determine whether the blood flow is normal. Usually, it requires training and experience to interpret ultrasound results for diagnostic or healthcare purposes, even with custom software. For research purposes, however, things are a bit different. The custom software may not always be what we are looking for, and diagnosis may not always be the end-game. In my research, I analyse beat-by-beat blood flow velocity in healthy participants, looking at how this changes over time. I have used custom Matlab scripts to analyse these, following quite simple steps:

  1. Export the data in .dat or .txt format. This will depend on the software, and I have found it easiest to work with machines that allow direct (during data collection) connection with a computer, feeding into Spike, LabChart or similar programs, as I know how to export data from such software without any fuss.
  2. Read the data file in MATLAB (or similar), then follow these processing steps:
    1. Visualise the data. I always look at my data before I start working on it. This helps me gauge the quality and make sure I’ve measured what I think I have measured.
    2. Smooth the data. No data is perfect, and TCD data can be quite noisy for a number of reasons. While smoothing helps get rid of some of these issues, I prefer to smooth as little as possible.
    3. Remove any gaps. There may be gaps in the data file where the probe had to be shifted for gel application or similar. These can be removed manually, or you can use thresholds (a lower and an upper bound, outside the physiological range). I prefer to automate as much as possible, both because I don’t want the hassle and because I want to avoid the potential bias that comes with manual changes. I tend to visualise the data file whenever I use thresholds, to make sure the data looks fine, and I use the same thresholds for all subjects.
    4. Remove extra noise? If the data is particularly noisy, a second threshold excluding values that are greater/smaller than e.g. 1 standard deviation from the rest of the window should be helpful in removing noise. This should get rid of peaks arising from shifting the probe or applying more gel.
    5. Find the min and max values. I typically run a for loop for this, tagging all the min and max values (creating vectors for tags, values and time). While it may not be the most elegant method, I prefer to find the values by creating a window moving through the datafile, using an if statement to tag only values that are higher (max) or lower (min) than the other values in the window. I tend to use +/- 40 data points if the acquisition rate is about 50 Hz. I visualise the min/max values superimposed on the raw data to make sure I have correctly identified the values. This is where it is important not to have smoothed too much, because that flattens the wave and you might end up with several similar values and shifted/non-existing peaks/troughs.
    6. Identify the waveforms. I run a for loop to identify full waveforms, using the tags created in the step above. If the min values are tagged ‘0’ and the max values ‘1’, then a waveform would be identified by a 0-1-0 sequence (with an appropriate if statement excluding datapoints that are too far apart in terms of time to avoid any misclassifications across several waveforms).
    7. Calculate what you are looking for. With these vectors, means can be calculated (or systolic values, diastolic values, R-R intervals and so on). Means can be calculated as follows: Mean CBFV = (PSV + [EDV × 2])/3, where CBFV is cerebral blood flow velocity, PSV is peak systolic velocity (the max value of the waveform), and EDV is end-diastolic velocity (the second min value in the waveform). These are highlighted in the figure above. Just for reference, average adult mean flow velocity is 46-86 cm/sec.
  3. Rinse and repeat for all participants.
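The steps above can be sketched roughly in code. My scripts are in Matlab, but here is a minimal Python sketch of the same idea – the thresholds and window size are illustrative assumptions, and the time-gap check from the waveform step is omitted for brevity:

```python
def clean(velocity, lo=10.0, hi=150.0):
    """Keep only samples within a (roughly) physiological range.
    The thresholds here are illustrative, not prescriptive."""
    return [v for v in velocity if lo <= v <= hi]

def tag_extrema(velocity, half_window=40):
    """Tag local minima (0) and maxima (1) using a +/- half_window
    sample window (e.g. 40 points at ~50 Hz).
    Returns a list of (index, value, tag) tuples."""
    tags = []
    for i in range(half_window, len(velocity) - half_window):
        window = velocity[i - half_window:i + half_window + 1]
        if velocity[i] == max(window):
            tags.append((i, velocity[i], 1))
        elif velocity[i] == min(window):
            tags.append((i, velocity[i], 0))
    return tags

def mean_cbfv(psv, edv):
    """Mean CBFV = (PSV + 2 * EDV) / 3."""
    return (psv + 2.0 * edv) / 3.0

def waveforms(tags):
    """Identify full waveforms as min-max-min (0-1-0) tag sequences."""
    beats = []
    for a, b, c in zip(tags, tags[1:], tags[2:]):
        if (a[2], b[2], c[2]) == (0, 1, 0):
            beats.append({"psv": b[1], "edv": c[1],
                          "mean": mean_cbfv(b[1], c[1])})
    return beats
```

Note that a flattened plateau (several equal values inside one window) would produce duplicate tags here, which is exactly why over-smoothing before the peak-finding step is a bad idea.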

 

Further reading:

Kirsch, J.D. et al. (2013) Advances in Transcranial Doppler US: Imaging Ahead. RadioGraphics 33: E1–E14. Link.

Purkayastha, S. & Sorond, F. (2012) Transcranial Doppler Ultrasound: Technique and Application. Semin Neurol. 32(4): 411–420. Link.


Magnet mistakes

This is just a short post on the many ways in which films and telly often get MRI wrong, and one thing that they tend to get right. Also, it is a good excuse to post a few interesting MRI videos.

1. The magnet is ALWAYS on. You don’t turn on an MRI. Nor do you turn it off. The machine uses a magnetic field which is always on as long as the machine is operational (whether scans are taken or not). This field can be pretty strong, and will trap ferrous metal objects in the bore, even if there is no scan running. Actually turning off the magnetic field (quenching it) is only done if the scanner is being decommissioned or in life-threatening situations, as it puts the scanner out of action for at least a week and costs a lot to restart (>£20,000), even if it has not been damaged by the quench.

2. Pressing the red button is usually bad news. There are two types of big red button. One is an emergency stop which does not turn off the magnet per se, but turns off power to consoles, lights (not emergency lights) and so on. The other quenches the magnet (rarely done, see the above point) and it looks like this:

[Video: a magnet being quenched]

The video above is a magnet being quenched at 1% helium capacity, which is to say that it is not nearly as big an event as it could be.

3. The magnet is as strong as it is. Variable field strength is not a thing. You cannot turn up the field, and you cannot turn it down.

4. Scans typically take time to acquire and interpret. If it were possible to put a person in and read out the data within seconds, that would be great. However, a good structural scan takes minutes, and a functional scan often longer still, and it requires additional processing steps that can take hours or days. There are also setup scans typically run before the main scan, and these also take time. It is unfortunately not plug and play. Caveat: real-time MRI is a thing, but it is mostly used for cardiac imaging and rare cases of functional MRI neurofeedback sequences. Typically, these are not the ones portrayed in the offending films.

5. Colours? Scans usually don’t come automatically in pretty colours. Structural scans are in more-or-less grainy black and white, and while functional scans can be presented in colour, this requires a lot of processing after the scan has been completed (see above). And what you get out is typically a statistical map of the signal, not the actual measurements themselves. In short: colour usually means lots of stats, and stats usually mean lots of time.

6. It’s noisy! And not simply high-tech whirr either: it can sound like a construction site in there.

7. There’s often a coil. At least with neuroimaging, where the coil is a cage-like structure placed around the head. This mistake can be forgiven if the scan in question would use a body coil, which can be pretty invisible and look like a part of the table.

There are plenty of films getting MRI wrong, for example Die Another Day (although the MRI bloopers are arguably not the biggest problem with that film) and Terminator Genisys, which manages to get not only the turning-on-and-off wrong but also the variable field strength, while introducing a conveniently appalling lack of shielding (meaning that the fringe field, the magnetic field that surrounds the magnet, is so large it reaches the control room). Go watch the Terminator clip over on youtube (from 1 min) to see for yourself. I feel for that poor MRI scanner.

+1. It’s strong. This is the one most get right. The magnetic field is strong – it will pull ferrous items into the bore of the magnet, wreck the item (and sometimes itself), and you’re probably not strong enough to stop it.

That being said, there are plenty of films where metal props are far too close to the magnet to be believable, and the ‘patients’ are allowed to keep on items of clothing such as underwired bras and watches, and even bring handbags or other personal items into the scan room. Even if such items are not ferrous, they can still cause image artefacts, and are typically removed. I’ve been told there is a Grey’s Anatomy episode where an MRI was requested for a patient with a fork stuck in the neck – the less said about that the better.

Reblog: undergraduate co-authors

There is a short blog post over at Dynamic Ecology on undergraduate co-authorship. Fortunately, there seems to be agreement that undergraduates can be authors, and that authorship should be determined based on contribution rather than rank (which, if reversed, would be a rather, well, rank system). What I find most interesting, however, is the comments section, where strategies for fostering authorship opportunities for undergraduates are discussed. How do we aid the process and make sure we take steps to include rather than exclude? It’s a good (and short) read:

Our next question in our ask us anything series comes from Liz: For undergraduate researchers, what is enough of a contribution to merit co-authorship versus acknowledgements? – via Dynamic Ecology

Great science writing

Great science writing is hard to find. I have assembled a list of some of the best popular science books I’ve come across in the past few years. Some are old, some new-ish, but all of them were good. It is a mixed bunch with respect to topic, but I am a great believer in reading outside of your field. In no particular order:

Spillover: Animal Infections and the Next Human Pandemic (David Quammen)
This is an excellent read. Each chapter is dedicated to a zoonotic disease, from AIDS to Nipah, with titles such as Thirteen Gorillas and Dinner at the Rat Farm. The book was released before the 2013-2016 Ebola outbreak, so it may also be worth picking up Quammen’s Ebola: The Natural and Human History of a Deadly Virus which supplements his chapter on Ebola.

The Emperor of all Maladies: A Biography of Cancer (Siddhartha Mukherjee)
This is the story of cancer, from the first mentions of the disease (over 4000 years ago) to today. It covers the development of treatment options, the successes and the failures. It is grisly and bleak in (many) places, perhaps unsurprising given the subject, but highly acclaimed, beautifully written and very accessible. While the conclusion of the book unfortunately is not a promise of a cancer-free world, it nevertheless comes across as cautiously optimistic.

The Immortal Life of Henrietta Lacks (Rebecca Skloot)
Skloot’s book is about Henrietta Lacks, whose cancer gave rise to the HeLa cell line used widely in research today. It is as much about the mistreatment of Lacks and her family in relation to the use of the cell line as it is about the scientific breakthroughs (and problems) that the cell line made possible. An important reminder that ethics in medical research is necessary, not optional.

Why we sleep (Matthew Walker)
“Scientists have discovered a revolutionary new treatment that makes you live longer. It enhances your memory, makes you more attractive. It keeps you slim and lowers food cravings. It protects you from cancer and dementia. It wards off colds and flu. It lowers your risk of heart attacks and stroke, not to mention diabetes. You’ll even feel happier, less depressed, and less anxious. Are you interested?” Neuroscientist Matthew Walker’s book on sleep is an excellent, compelling read on an important topic.

Six degrees (Mark Lynas)
This book on global warming was released in 2008, and is bound to be a bit dated. It is a great read nevertheless, and summarises the danger that increasing temperatures pose. Each chapter sets out to explain the consequences of temperature increase, degree by degree (One degree, Two degrees, Three degrees and so on) up to six degrees. As you might expect, the effects get increasingly severe as you read on, but the book is more a call to action than to apathy. It is a great, if distressing, read.

The Drugs Don’t Work: A Global Threat (Sally Davies, Jonathan Grant, Mike Catchpole)
Not to be confused with the Verve song, this book is about antibiotic resistance and the thin pipeline for antibacterials. It is a very short book, written by experts in the field including the Chief Medical Officer for England (Davies), and its message is that the loss of antibiotics means the end to modern medicine as we know it. Published in 2013, the book unfortunately remains relevant and serious today.

The World Without Us (Alan Weisman)
What would happen if humans suddenly disappeared? This is a thought experiment, speculating on the outcome of the sudden (and impossible) disappearance of all humans on earth. When would our buildings crumble? How would our pets fare? When would all traces of us be lost? It’s an interesting idea, well written and discussed, and oddly reassuring in its description of a world without humans continuing to grow and thrive.

The Oxford Book of Modern Science Writing (Richard Dawkins)
This is a collection of science writing, penned by scientists and reaching as far back as the early 1900s. It is a varied collection, both in terms of topic and style, but it is a great read for anyone wanting to work on their science communication, or just wanting to read excellent prose on random scientific topics.

Stiff: The Curious Lives of Human Cadavers (Mary Roach)
I admit to buying this book out of morbid fascination with the topic. Still, it is an interesting read. The book elegantly covers all aspects of human cadavers: their historical uses, their decomposition (or lack thereof), moral issues, how we dispose of them and how they can help us even in the modern age. It’s a great read if you are not squeamish.

Rabid: A Cultural History of the World’s Most Diabolical Virus (Bill Wasik, Monica Murphy)
This is a summary of rabies, what we know of it, its history and cultural significance. It outlines the early misconceptions, the discovery of the virus, and the attempts at treatment in easy-to-understand language. It also highlights the impact rabies has had on society, including folklore. A fascinating read.

A Short History of (Nearly) Everything (Bill Bryson).
I do not think any list of popular science books would be complete without this gem. It is a book which covers (nearly) everything in surprisingly easy-to-follow language. Think Cosmos: A Personal Journey or Cosmos: A Spacetime Odyssey in book format, with more detail. As overviews go, it is in my view quite unbeatable.

Gut: The Inside Story of Our Body’s Most Underrated Organ (Giulia Enders)

And finally, a recently-released book that I have not yet read (but want to): Inferior: How Science Got Women Wrong – and the New Research That’s Rewriting The Story by Angela Saini, reviewed very favourably here.

Neurocomic

Following my list of resources for fresh neuroscientists, I figured I’d share something for those interested in exploring neuroscience but not quite ready to pick up a textbook, namely Neurocomic.

Neurocomic is a Wellcome Trust supported project that aims to explain neuroscience ideas to a lay audience using comics. The brainchild of neuroscientists Hana Ros (UCL) and Matteo Farinella (who is also the artist), the story follows a man as he is trapped inside a brain and journeys to escape. The quest takes him through neuron forests, distinct brain regions and visual metaphors of common concepts in neuroscience and psychology, encountering various beasts and scientists along the way. Judging by its reviews, most people find it accessible and accurate, if a bit short (and short on women). Personally, I am particularly enthused by the medium.

The process and ideas behind the project are explained in the video below, and might be of interest to potential readers and to researchers considering tools for effective science communication.

Using illustrations to communicate science (or any information, really) can be powerful. I tend to draw quite a lot in my work, and find that visualising problems helps me work through them faster and see connections that might not be immediately apparent. It works great as a study method too (see this paper for example), helping students remember content better. As for communicating information, it is superb. I have a list of favourite science comics that have introduced me to new concepts more than once (I am particularly looking at you, xkcd, and your lovely comic explanation wiki). While three-panel strips rarely provide a full picture of a scientific concept, they can certainly offer a brief and memorable introduction. Similarly, neurocomic’s 150 pages are not enough to encompass the entire history and science of brain research, but it is a good place to start to get a flavour for some of the ideas in the field.

Neurocomic is on twitter @neurocomic

Resources for new neuroscientists

Analysis can be tricky at times, especially in neuroscience. Our business is one of maths and stats, and it can all be a bit complex, especially for those new to the field. Below is a short list of resources available to neuroscientists that may be useful when trying to make sense of the data:

1. The Q&A forum. There is a new resource in town for those new to neuroscience and stuck on some technical or analysis problem: https://neurostars.org/. It is a discussion forum where you can post your questions and (hopefully) get answers from the community. It has been open since December 2016, and from what I can tell, most questions posed on the forum have received at least one answer. It is not limited to any particular software library, and it has a search function that can help you figure out if anyone has asked your question before. Whilst I have not used it myself (yet), it seems like a good resource.

2. The forum for (mostly) FSL. There is also FSL’s JISCmail site, which works on the same principle as neurostars. As the name suggests, this is a discussion forum for FMRIB’s Software Library (FSL), but the topics span everything from the highly technical to basic model design, which means it could be useful for those favouring other software. As an FSL user, I’ve found this site immensely helpful whenever I’ve been stuck, and I can vouch for the quality of the answers on the forum. Records go back to 2001, making this a huge resource where you are almost guaranteed to find the answer you need. However, make sure that the Q&A is not too old, as analysis tools do evolve.

3. The introduction to analysis. Finally, if you don’t really have a specific question, but would rather like an introduction to MRI analysis, I can thoroughly recommend Jeanette Mumford’s youtube channel, mumfordbrainstats. It’s clear and understandable, and it will make sure you understand what goes on under the bonnet of your analysis. She also created this nifty power calculator for fMRI – fmripower – which can give you a better indicator of power in your next experiment than “other people have used X subjects, so we’re going to go with that”. While it can only calculate power for a limited number of statistical tests, it is still a very useful starting point.

I’m willing to bet that at least one of the above will be able to help should you find that your analysis suddenly doesn’t work the way you expected it to.


How small is too small? MRI of tiny structures

MRI is great for imaging tissues and organs, as it does not involve any invasive procedures (such as drugs, radiation or even needles/scalpels). It allows us to quickly and safely get a good idea of what goes on under the skin. However, just as with a photo, it can become pixelated and useless, especially if you are looking at small structures. Imagine taking a holiday snapshot of a distant landmark, say, the Eiffel tower, but when you zoom in, it becomes hard to see what is tower and what is sky. Up close, the image is hard to interpret, and determining the exact size of the structure in the image may become impossible. In a pixelated image of the Eiffel tower, one pixel may contain both tower and sky, and you can’t tell where the exact line between the two goes. This is also the case with MRI.


The MRI image is divided into voxels (same as pixels, just three-dimensional – think of it as a set of tiny cubes making up the image). The quality of your image depends on the number of voxels and the signal from these. Too few voxels, not enough information. It would be like having just a few pixels covering the spire of the Eiffel tower – it won’t look good. One voxel covering a big chunk of the brain of a human participant won’t be very useful as there simply is not enough resolution to determine what’s what.

You can also have enough voxels but too little signal. Too little signal means not enough information. It would be like taking the photo without any light. In photography, light creates the image. If you have plenty of light, you can typically get more detail from each part of your image, and so your resolution can be better. Not enough signal means you need to keep your shutter open for longer, to let more light in. You can do that, but you probably need a tripod, and any movement (birds, people, wind, clouds, a bus rumbling past) will affect your picture and make it blurry. Same with MRI. You can increase signal by increasing scan time, but that’s not always possible and means you have to keep people still and in the scanner for longer.


Typically, the stronger your field strength, the more signal (light) you can get from your voxels. More signal means you can reduce voxel size as you will be getting more detail from each part of your scan. This typically results in better resolution. Or, you can get the same resolution, just faster (i.e. keep the shutter open for a shorter period). This can be useful if what you’re wanting to scan moves. For some tissues, it is invaluable to be able to get quick images. Imagine getting an image of a beating heart, for example. You need a quick scan sequence, and you typically need to do it several times over to get a nice, detailed image. That is the equivalent of taking lots of quick shots of the Eiffel tower (swaying in the wind, perhaps), and piecing them together to get all the fine details. This process of getting many snapshots of the same thing and taking the average to get a good image is very common in MRI, and it helps with a third issue: noise.

Everything you see in an MRI image is either signal, or it is noise. We call the relationship between signal and noise a signal-to-noise ratio (SNR), and we want it to be as high as possible (more signal, less noise). Noise in MRI images is usually caused by particles with an electrical charge moving around slightly in the human body, or by electrical resistance in the MRI machine itself. Together, these cause variations in signal intensity. Again, in our photo of the Eiffel tower, this is much the same: small visual distortions that give it a grainy quality.


When we have smaller voxels, we typically get less signal and more noise per voxel (a low SNR). The way around it is to increase the number of averages that we run – i.e. take more snapshots, get a better image. Unfortunately, this takes time. Usually, then, we end up with a compromise of how good a resolution we want and how long we want the scanning to take, and this is greatly dependent on the kind of signal we can get.
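To see why averaging helps, here is a small hypothetical simulation (standard-library Python, with made-up signal and noise values): the spread of the averaged estimate shrinks with the square root of the number of averages, so SNR roughly doubles each time you quadruple the number of snapshots.

```python
import random
import statistics

def empirical_snr(n_averages, trials=2000, true_signal=100.0, noise_sd=20.0):
    """Estimate SNR of an averaged measurement: the 'signal' divided by
    the spread (standard deviation) of the averaged estimates."""
    rng = random.Random(42)  # fixed seed for reproducibility
    estimates = [
        statistics.mean(true_signal + rng.gauss(0, noise_sd)
                        for _ in range(n_averages))
        for _ in range(trials)
    ]
    return true_signal / statistics.stdev(estimates)
```

With these illustrative numbers, 4 averages give roughly twice the SNR of a single snapshot, and 16 averages roughly four times – which is why halving the noise costs you four times the scan time.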

The good news is that signal is, as mentioned earlier, grossly dependent on the field strength of our magnet. And we have some strong magnets available to us. For a human scan with a field strength of 3 Tesla (T), a resolution of 1mm x 1mm is easily obtained in about 5 minutes. That is fine for a structural scan of, say, a human brain. But what if you want to scan something smaller? Humans move, no matter how hard they try not to, even if it is only to draw breath. Higher resolutions mean you’ll pick up on these small movements. A tiny motion can shift a small voxel completely out of place, while a bigger voxel would be less affected. In short, smaller makes things more difficult.

So, how small is too small?

For higher-field machines in humans, such as 7T, you can go small. 0.5mm x 0.5mm is easily doable at 7T, and scans have gone down to 0.14mm x 0.14mm even in vivo [1]. For our 9.4T scanner, we have scanned with resolutions of 0.03mm x 0.03mm (post mortem), and resolutions of 0.06mm x 0.06mm should be possible for scans that are not post mortem. That means with 9.4T, we can in theory image structures as small as 0.25mm across with some clarity (remember, you need at least a few voxels across the thing you want to image to properly see its edges and detail). Our scanner is too small for a whole human, but 9.4T MRI machines for humans do exist. And an 11.75T magnet is underway (see the BBC news story here), which should be able to get resolutions down to 0.1mm (or possibly finer) in humans. So at present, anything less than 0.25mm is probably too small for MRI.
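The back-of-the-envelope arithmetic here can be made explicit. A minimal sketch, assuming (as above) that we want at least a few voxels – say four – spanning a structure to see it with some clarity:

```python
def voxels_across(structure_mm, voxel_mm):
    """How many voxels span a structure of a given size."""
    return structure_mm / voxel_mm

def min_structure_mm(voxel_mm, voxels_needed=4):
    """Smallest structure we can image 'with some clarity', assuming
    we want at least voxels_needed voxels spanning it (an assumption)."""
    return voxel_mm * voxels_needed
```

With 0.06mm voxels, that gives a smallest structure of roughly 0.24mm – in line with the 0.25mm figure above.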

Smaller than that, and we have to use other methods. Microbiologists and their microscopes are likely to laugh in the face of 0.25mm. Below is a picture with a 9.4T image and histology image of testicular tissue [2], showing how much detail both techniques afford. Histology is obviously better. Sadly, however, it still requires scalpels, and MRI does not.

[Figure: 9.4T MRI and histology images of testicular tissue]

References:
[1] Stucht D, Danishad KA, Schulze P, Godenschweger F, Zaitsev M, Speck O. Highest Resolution In Vivo Human Brain MRI Using Prospective Motion Correction. PLoS ONE. 2015;10(7):e0133921. Link: https://www.ncbi.nlm.nih.gov/pmc/articles/PMC4520483/
[2] Herigstad M, Granados-Aparici S, Pacey A, Paley M, Reynolds S.

The wiring of DIN plugs

I get to do all sorts of practical stuff at work, some of which has nothing to do with physiology or imaging. One such thing is producing custom-made cables and plugs. Since I’ve had the pleasure of doing a fair few of these recently, I thought I’d put up a short how-to blog post on wiring up a DIN connector.

DIN stands for Deutsches Institut für Normung, which translates to the German Institute for Standardisation. A DIN connector is, in short, a standardised connector; the circular connectors in this family come in a similar size. You may have seen them, as they are often used for analogue audio. The male DIN plug is typically 13.2 mm in diameter, and it often has a notch at the bottom to make sure the plug goes in the right way. Male plugs have a set of round pins, 1.45 mm in diameter, that are equally spaced within the plug. The different types of DIN plugs have different numbers and configurations of pins. Below is an overview of some typical pin configurations.

[Figure: typical DIN plug pin configurations]

There are, of course, variations on these themes, as well as specialised plugs with more than 10 pins. Pins on male connectors are numbered. The numbering* goes from right to left, viewed from the outside of the connector with the pins upward and facing the viewer. The female counterparts are the inverse of the male plugs, and their numbering is from left to right. Usually, only corresponding male-female pairs work together, but you may be able to fit a 3-pin plug into a 5-pin 180-degree socket.

*EDIT: to clarify, the numbering of the pins is not sequential, as pointed out in the comments (thank you!). A 5-pin plug would be numbered (from right to left for the male, and from left to right for the female): 1–4–2–5–3.
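The numbering above is easy to get wrong at the bench, so it can help to write it down as a lookup. This tiny Python snippet simply encodes the 5-pin 180-degree ordering described in the edit (the helper function is mine, for illustration, not part of any standard):

```python
# Pin numbers by physical position for a 5-pin 180-degree DIN plug.
# Male plug: positions run right to left, viewed from outside with the
# pins facing you. Female socket: positions run left to right.
PIN_ORDER = [1, 4, 2, 5, 3]

def pin_at_position(position, order=PIN_ORDER):
    """Pin number at a 1-indexed physical position."""
    return order[position - 1]
```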

So how to attach a DIN plug to a cable? 

  1. Take the cable and strip off a few centimetres of the outer plastic, then remove the padding. Strip about 1 cm of insulation from each of the individual internal cables (cores). Slide the DIN plug’s metal shell over the cable now, as it cannot be added after soldering.
  2. Make sure that each core fits neatly into the holes of the DIN plug. This might mean trimming some of the wires. Move the uninsulated wires (the ground) to one side.
    [Photo 1]
  3. Solder (tin) the strands of each core together and test that they still fit the holes in the plug.
  4. (Now comes the fiddly part.) Add a small amount of solder to the DIN plug holes, heat with the soldering iron and push the tinned cores into the holes. Remove the soldering iron and let the solder solidify (a few seconds only). Test that the joints hold by pulling firmly on the plug and cores. Attach the metal clamp around the cores (see the second of the figures below).
    [Photo 3]
  5. Wrap the ground wires around the base of the metal clamp. Make sure the ground does NOT touch the cores. Using a multimeter, test that there is no connection between wires, or between wires and ground. Slide the metal shell over the assembly and the plastic, and screw on the release catch where it overlays the hole in the metal clamp.
    [Photo 4]

Done.

Reblog: The dreaded Q/A session

The Q/A session can be a daunting prospect, not only for the presenter but also for the audience members. Many of us have been in the position of having a question after a talk, but not quite daring to voice it. In a blog post over at The Female Scientist, immunologist Viki Male suggests 7 steps to overcoming the fear of the Q/A and instead learning to use it to engage with the community and further your career goals. The post has some really good points – my favourite being “Let go of the idea that you have to ask “clever” questions. The most successful questions are usually the genuine ones.” It is worth a read.

How I Learned to Stop Worrying and Love Q&A Sessions!

The gender gap in asking questions is problematic because doing so is good for your career. Here are some tips on how to overcome your nerves and to get involved in Q&A sessions!

The evidence suggests that women ask fewer questions than men at conferences. It’s possible – indeed likely – that some of this effect can be accounted for by women being called on less often by the chair. But at least part of the problem is lacking the confidence to ask questions in front of an audience, and this seems to affect women more than men.

If you ask questions in departmental seminars, you will be noticed as an engaged scientist and good departmental citizen. This will be mentioned to your potential future employers and collaborators. When going up for internal awards, your engagement – or lack of it – will be noted and does influence your chance of success.

 

Reblog: Electrical shavers and splashed saline

One morning, you and your friend go to a café and get two identical coffees. Without telling you, the barista gives you a regular coffee and your friend a decaf. This means you are blinded to whether your drink contains caffeine or not.

I recently had the pleasure of being asked to give feedback on a blog post on blinding in clinical trials, and how this can be done if the trial involves surgery. It was a really interesting read, highlighting how creative you have to get to conceal real versus fake surgical interventions.

For anyone interested, the full post has just been released here.

For anyone very interested, the review paper that the blog post was based upon is here.