Tag Archives: fmri

Commercial fMRI neurobollocks – no, you cannot record your dreams (yet).

Cash cow? I wish! But no.

With thanks to Micah Allen (@neuroconscience) for pointing this one out.

My day job is as an fMRI (functional magnetic resonance imaging) researcher, so you can imagine how tickled I was when I came across a brand-new neurobollocks-peddler who’s chosen to set up shop right on my patch!

Donald H. Marks is a New Jersey doctor who in 2013 set up a company called ‘Millennium Magnetic Technologies’. Readers old enough to remember Geocities sites from the mid-90s will probably derive some pleasant nostalgia from visiting the MMT website, which is refreshingly unencumbered by anything so prosaic as CSS. Anyway, MMT offer a range of services, under the umbrella of “disruptive patented specialty neuro imaging and testing services”. These include the “objective” documentation of pain, forensic interrogation using fMRI, and (most intriguingly) thought and dream recording.

This last one is something that’s expanded on at some length in a breathlessly uncritical article in the hallowed pages of International Business Times (no, me neither). According to the article:

“The recording and storing of thoughts, dreams and memories for future playback – either on a screen or through reliving them in the mind itself – is now being offered as a service by a US-based neurotechnology startup.

Millenium Magnetic Technologies (MMT) is the first company to commercialise the recording of resting state magnetic resonance imaging (MRI) scans, offering clients real-time stream of consciousness and dream recording.”

And he does this using his patented (of course) ‘cognitive engram technology’, and all for the low, low price of $2000 per session.

It’s clear from the article and the MMT website that he’s using some kind of MVPA (multi-voxel pattern analysis) technique with the fMRI data. This technique first came up about 10 years ago, and is based on machine learning algorithms. Briefly, an algorithm is trained, and ‘learns’ to distinguish differences in a set of fMRI data. The algorithm is then tested with a new set of data to see if what it learned can generalise. If the two sets of data contain the same features (e.g. the participant was exposed to the same stimulus in both scans) the algorithm will identify bits of the brain that contain a consistent response. The logic is that if a brain area consistently shows the same pattern of response to a stimulus, that area must be involved in representing some aspect of that stimulus. This family of techniques has turned out to be very useful in lots of ways, but one of the most interesting applications has been in so-called ‘brain-reading’ studies. In a sense, the decoding of the test data makes predictions about the mental state of the participant; it tries to predict what stimulus they were experiencing at the time of the scan. A relatively accessible introduction to these kinds of studies can be found here.
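To make the train-then-test logic concrete, here’s a minimal sketch in Python using scikit-learn and purely synthetic ‘voxel’ data – the patterns, noise levels, and choice of classifier are all illustrative assumptions on my part, not anyone’s actual analysis pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels = 50

# Two stimulus classes, each evoking a distinct underlying voxel pattern
pattern_a = rng.normal(0, 1, n_voxels)
pattern_b = rng.normal(0, 1, n_voxels)

def simulate_run(n_trials=40, noise_sd=1.0):
    """Simulate one scanning run: noisy voxel patterns plus trial labels."""
    X, y = [], []
    for _ in range(n_trials):
        label = int(rng.integers(2))
        base = pattern_a if label == 0 else pattern_b
        X.append(base + rng.normal(0, noise_sd, n_voxels))
        y.append(label)
    return np.array(X), np.array(y)

# Train on one 'scan', then test generalisation on an independent 'scan'
X_train, y_train = simulate_run()
X_test, y_test = simulate_run()

clf = LinearSVC().fit(X_train, y_train)
acc = clf.score(X_test, y_test)
print(f"decoding accuracy: {acc:.2f}")
```

With clean, well-separated synthetic patterns like these the decoder scores near-perfectly; real fMRI data is vastly noisier, which is why real decoding accuracies tend to sit much closer to chance.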

So, the good Dr Marks (who, by the way, has but a single paper using fMRI to his name on Pubmed) is using this technology to read people’s minds. However, needless to say, there are several issues with this. Firstly, to generate even a vaguely accurate solution, these algorithms generally need a great deal of data. The dream decoding study that MMT link to on their website (commentary, original paper) required participants to sleep in the MRI scanner in three-hour blocks, on between seven and ten occasions. Even after all that, the accuracy of the predictive decoding (distinguishing between pairs of different stimulus categories, e.g. people vs. buildings) was only between 55 and 60%. Statistically significant, but not terribly impressive, given that the chance level was 50%.
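For a sense of how 55-60% accuracy against a 50% chance level can be statistically significant yet still unimpressive, here’s a quick one-sided binomial test – the 60-correct-out-of-100-trials numbers are made up for illustration, not the actual study’s trial counts:

```python
from scipy.stats import binomtest

# Hypothetical example: 60 correct out of 100 two-alternative trials, chance = 0.5
result = binomtest(60, n=100, p=0.5, alternative="greater")
print(f"p = {result.pvalue:.3f}")  # ~0.028: 'significant', but 60% is hardly mind-reading
```

A p-value below 0.05 licenses the phrase ‘statistically significant’, but the decoder is still wrong on four out of every ten trials.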

My point here is not to denigrate this particular study (which is, honestly, a pretty amazing achievement); it’s to make the point that this technology is not even close to being a practical commercial proposition. These methods are improving all the time, but they’re still a long way from being reliable, convenient, or robust enough to be a true sci-fi style general-purpose mind-reading technology.

This apparently doesn’t bother Dr Marks though. He’s charging people $2000 a session to have their thoughts ‘recorded’ in the vague hope that some kind of future technology will be able to play them back:

“The visual reconstruction is kind of crude right now but the data is definitely there and it will get better. It’s just a matter of refinement,” Marks says. “That information is stored – once you’ve recorded that information it’s there forever. In the future we’ll be able to reconstruct the data we have now much better.”

No. N. O. No. The data is absolutely, categorically not there. Standard fMRI scans these days record at a resolution of 2-3mm. A cubic volume of brain tissue 2-3mm on each side probably contains several hundred thousand neurons, each of which may be responding to different stimuli, or involved in different processes. fMRI is therefore a very, very blunt tool, in terms of capturing the fine detail of what’s going on. It’s like trying to take apart a Swiss watch mechanism when the only tool you have is a giant pillow, and you’re wearing boxing gloves. A further complication is that we still have so much to learn about exactly how and where memories are actually represented and stored in the brain. To accurately capture memories, thoughts, and even dreams, we’ll have to use a much, much better brain-recording technology. It’s potentially possible someday (and that ‘someday’ might even be relatively close), but the technology simply hasn’t been invented yet.
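The ‘several hundred thousand neurons per voxel’ figure is back-of-envelope arithmetic, and easy to check – note that the density number below is an assumed rough average, since published estimates for cortex vary widely by region:

```python
# Rough assumed average cortical density; real figures vary a lot by region
neurons_per_mm3 = 50_000
voxel_side_mm = 2.5  # a standard fMRI voxel is typically 2-3mm on each side

# Volume of a cubic voxel times density
neurons_per_voxel = voxel_side_mm ** 3 * neurons_per_mm3
print(f"~{neurons_per_voxel:,.0f} neurons per voxel")
```

Each of those hundreds of thousands of neurons is collapsed into a single number per scan, which is what makes the ‘pillow and boxing gloves’ analogy apt.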

So, the idea that you can read someone’s mind in a single session, and preserve their treasured memories on a computer hard disk for future playback, is simply hogwash. I’m as excited by the possibilities in this area as the next geek, but it’s just not possible right now. Dr Marks is charging people $2000 a pop for a pretty useless service, no matter how optimistic he might be about some mythical kind of future mind-reading playback device.

NB. I’ve got a lot more to say about MMT’s other services too, but this post’s got a bit out of hand already, so I’ll save that for a future one…

More eye-wateringly egregious neuromarketing bullshit from Martin Lindstrom

Martin Lindstrom is a branding consultant, marketing author, and (possibly because that wasn’t quite provoking enough of a violently hateful reaction in people) also apparently on a one-man mission to bring neuroscience into disrepute. He’s the genius behind the article in the New York Times in 2011 (‘You love your iPhone. Literally’) which interpreted activity in the insular cortex (one of the most commonly active areas in a very wide variety of tasks and situations) as genuine ‘love’ for iPhones. This was a stunningly disingenuous and simple-minded example of reverse inference, and was universally derided by every serious commentator, and many of the more habitually rigour-phobic ones as well.

Unfortunately, it appears his reputation as a massive bull-shitting neuro-hack hasn’t quite crossed over from the neuroscience community into the mainstream, as I realised this weekend when I settled down to watch The Greatest Movie Ever Sold. Morgan Spurlock’s documentary about branding, product placement and the general weirdness of the advertising world is generally excellent; however, it unfortunately makes the mistake of wheeling on Lindstrom for a segment on neuromarketing. You can see his piece from the movie in the video below:

Lindstrom conducts an fMRI scan with Spurlock as the subject, and exposes him to a variety of advertisements in the scanner. Fair enough, so far. Then, however, Lindstrom explains the results using a big screen in his office. The results they discuss were apparently in response to a Coke commercial. According to Lindstrom, the activation here shows that he was “highly engaged” with the stimulus, and furthermore was so “emotionally engaged” that the amygdala, which is responsible for “fear, and the release of dopamine”, responded. Lindstrom then has no problem in making a further logical leap and saying “this is addiction”.

[Screenshot from the video: Lindstrom’s fMRI results display]

Needless to say, I have a somewhat different interpretation. Even from the shitty low-res screenshot grabbed from the video and inserted above I can tell a few things, primarily that Lindstrom’s pants are most definitely on fire. Firstly (and least interestingly), Lindstrom uses FSL for his fMRI analysis, but is using the crappy default results display. Learning to use FSLView would look much more impressive, Martin! Secondly, from the very extensive activity in the occipital lobe (and elsewhere), I’m able to pretty firmly deduce that this experiment was poorly controlled.

fMRI experiments rely on the method of subtraction, meaning that you have two close-but-not-identical stimuli, and you subtract the brain activity related to one from the other. Say, as in this case, that you’re interested in the brain response to a Coca-Cola commercial. An appropriate control stimulus might be, say, a Pepsi commercial, or even better, the Coke commercial digitally manipulated to include a Pepsi bottle rather than a Coke one. Then you subtract the ‘Pepsi’ scan from the ‘Coke’ scan, and what you’re left with is brain activity that is uniquely related to Coke. All the low-level elements of the two stimuli (brightness, colour, whatever) are the same, so subtracting one from the other leaves you with zero. If you just show someone the Coke advert and compare it to a resting baseline (i.e. doing nothing, no stimulus) you’ll get massive blobs of activity in the visual cortex and a lot of other places, but these results will be non-specific and tell you nothing about Coke – the occipital lobe will respond to absolutely any visual stimulus.
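The subtraction logic can be illustrated with toy numbers – these are synthetic ‘activation maps’, not real fMRI data, and the voxel counts, effect sizes and threshold are all arbitrary choices of mine:

```python
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000

# Generic visual response: 300 'occipital' voxels light up for ANY visual stimulus
visual = np.zeros(n_voxels)
visual[:300] = 2.0

# A small Coke-specific response: just 10 voxels
coke_specific = np.zeros(n_voxels)
coke_specific[300:310] = 1.0

def noise():
    return rng.normal(0, 0.1, n_voxels)

coke_map = visual + coke_specific + noise()
pepsi_map = visual + noise()   # matched control: same generic visual response
rest_map = noise()             # resting baseline: no stimulus at all

threshold = 0.5
vs_pepsi = int(np.sum((coke_map - pepsi_map) > threshold))
vs_rest = int(np.sum((coke_map - rest_map) > threshold))
print(vs_pepsi, vs_rest)
```

Against the matched control, only the handful of Coke-specific voxels survive the subtraction; against a resting baseline, the whole generic visual response comes along for the ride, giving you impressive-looking but uninterpretable blobs.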

From the very widespread activity evident in the brain maps above, it appears that this is exactly what Lindstrom has done here – compared the Coke advert to a resting baseline. This means the results are pretty much meaningless. I can even make a good stab at why he did it this way – because if he’d done it properly, he’d have got no results at all from a single subject. fMRI is statistically noisy, and getting reliable results from a single subject is possible, but not easy. Gaming the experiment by comparing active stimuli to nothing is one way of ensuring that you get lots of impressive-looking activation clusters that you can then use to spin any interpretation you want.

fMRI is a marvellous, transformative technology and is currently changing the way we view ourselves and the world. Mis-use of it by opportunistic, half-educated jokers like Lindstrom impoverishes us all.

Neuromarketing gets a neurospanking

A brief post today just to point you towards a couple of recent articles which pull down the pants of the neuromarketing business and give it a thorough neurospanking (© @psychusup).

The first one is a Q&A with Sally Satel, one of the authors of the recently-published and pretty well-received book Brainwashed. Sally makes some good points about ‘neuroredundancy’ – that brain scan experiments often don’t really tell you anything you don’t already know. Read it here. There’s also a good article on Bloomberg by Sally and Scott Lilienfeld here.

The other one is an article at Slate.com by associate-of-this-parish Matt Wall, which focuses particularly on a recent trend in neuromarketing circles – the use of cheap ‘n’ nasty EEG equipment and (potentially) dodgy analysis methods in order to generate sciencey-looking, but probably fairly meaningless results. Read that one here.

That’s all for now – I’ll be back with a proper post soon(ish).