Category Archives: Commentary

The NeuroBusiness 2015 Conference

Last week there was a conference in Manchester titled NeuroBusiness 2015, billed as the ‘first of its kind in the UK’. Actually the ‘neuroleadership’ guys have been doing similar stuff for ages. They have some serious conceptual issues, and there’s also an excellent piece on TheConversation.com about neuro-quackery in business and education. NeuroBusiness 2015 took it to a whole new level though. On the front page of their website there’s a quote from Professor Dame Nancy Rothwell, President and Vice Chancellor of the University of Manchester, which reads:

“I very much welcome the opportunity to bring neuroscientists together with business.”

This is a noble aim, but apparently no-one let the conference organisers know about it, as browsing the list of speakers quickly reveals that there were no neuroscientists invited. None. The closest we get are Dr Jenny Brockis, who appears to be a medic who found a more lucrative calling in brain-fitness-related motivational speaking (*yawn*), and Dr Paul Brown, a clinical psychologist who has… let’s say, an ‘interesting’ background, with various academic appointments in South-East Asia. Dr Brown is also the author of the book ‘Neuropsychology for Coaches’, the title of which suggests he doesn’t really know what the term ‘neuropsychology’ refers to. Unless of course it’s a book for American Football coaches who have to deal with regular traumatic brain injuries in their players, which I doubt.

Anyway, I’ve no idea if the conference was a roaring success or not, since, as a neuroscientist, I wasn’t invited. What I do know is that it turned into an utter debacle on Twitter. Conference attendees started tweeting nonsensical things like:

“Hack your brains dopamine to become addicted to success!”

or

“Men’s brains fire back to front, women’s fire side to side. That’s why women multi task well”

…and the neuroscientists on Twitter quickly and gleefully piled on with sarcasm, jokes and general rubbishing. At one point it became really rather difficult to detect which were genuine #neurobusiness2015 tweets and which were fake sarcastic ones. I did notice there were significantly fewer tweets from the conference on day 2 – was some announcement made? It was all jolly good fun for us neuroscientists, but I did start to feel a bit sorry for the conference organisers after a while.

However, I have a suggestion. One which would prevent something like this happening again. If any of the conference organisers happen to be reading this, my suggestion for NeuroBusiness 2016 (if it happens) is this:

INVITE SOME NEUROSCIENTISTS. People who actually know something about the brain. Some of us are actually quite engaging speakers, who would relish the opportunity to emerge from our dark basement labs and spend a day interacting with normal people. We’re not all massive nerds, obsessed with the abstract minutiae of our particular area of research. Well… I mean, we actually are, but that doesn’t mean we can’t function normally as well. Some of us even like to think about how neuroscience can be applied in everyday life. Just have a look around at people’s CVs and publications, and pick a few good ones. Or have a look on Speakezee, or even just send me an email through this site, and I’ll send you a list of suggestions.

Business people – it’s great that you’re interested in the brain. We get it. We are too, that’s why we do what we do. Unfortunately there are a lot of people out there who have realised that sticking the neuro- prefix on some old load of bollocks is a jolly good whizz-bang way to make loads of money on the motivational speaking circuit. If your computer breaks, you wouldn’t call a motivational speaker, would you? You’d call an IT expert. If you want to know about the brain – ask a neuroscientist.

Commercial fMRI neurobollocks – no, you cannot record your dreams (yet).

Cash cow? I wish! But no.

With thanks to Micah Allen (@neuroconscience) for pointing this one out.

My day job is as an fMRI (functional magnetic resonance imaging) researcher, so you can imagine how tickled I was when I came across a brand-new neurobollocks-peddler who’s chosen to set up shop right on my patch!

Donald H. Marks is a New Jersey doctor who in 2013 set up a company called ‘Millennium Magnetic Technologies’. Readers old enough to remember Geocities sites from the mid-90s will probably derive some pleasant nostalgia from visiting the MMT website, which is refreshingly unencumbered by anything so prosaic as CSS. Anyway, MMT offer a range of services, under the umbrella of “disruptive patented specialty neuro imaging and testing services”. These include the “objective” documentation of pain, forensic interrogation using fMRI, and (most intriguingly) thought and dream recording.

This last one is something that’s expanded on at some length in a breathlessly uncritical article in the hallowed pages of International Business Times (no, me neither). According to the article:

“The recording and storing of thoughts, dreams and memories for future playback – either on a screen or through reliving them in the mind itself – is now being offered as a service by a US-based neurotechnology startup.

Millenium Magnetic Technologies (MMT) is the first company to commercialise the recording of resting state magnetic resonance imaging (MRI) scans, offering clients real-time stream of consciousness and dream recording.”

And he does this using his patented (of course) ‘cognitive engram technology’, and all for the low, low price of $2000 per session.

It’s clear from the article and the MMT website that he’s using some kind of MVPA (multi-voxel pattern analysis) technique with the fMRI data. This technique first came up about 10 years ago, and is based on machine learning algorithms. Briefly, an algorithm is trained, and ‘learns’ to distinguish differences in a set of fMRI data. The algorithm is then tested with a new set of data to see if what it learned can generalise. If the two sets of data contain the same features (e.g. the participant was exposed to the same stimulus in both scans) the algorithm will identify bits of the brain that contain a consistent response. The logic is that if a brain area consistently shows the same pattern of response to a stimulus, that area must be involved in representing some aspect of that stimulus. This family of techniques has turned out to be very useful in lots of ways, but one of the most interesting applications has been in so-called ‘brain-reading’ studies. In a sense, the decoding of the test data makes predictions about the mental state of the participant; it tries to predict what stimulus they were experiencing at the time of the scan. A relatively accessible introduction to these kinds of studies can be found here.
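For the curious, here’s a minimal sketch of the train-then-generalise logic, using scikit-learn and entirely made-up ‘voxel’ data. It’s the bare bones of the approach only – not a real fMRI analysis pipeline, and certainly not MMT’s:

```python
# Toy MVPA-style decoding: train a classifier on some 'scans', then test
# whether it generalises to held-out 'scans'. The data are random numbers
# standing in for voxel values -- purely illustrative.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels = 80, 500

# Labels: 0 = stimulus A (say, faces), 1 = stimulus B (say, buildings)
labels = np.repeat([0, 1], n_trials // 2)

# Fake voxel patterns: mostly noise, plus a weak, consistent difference
# between the two conditions in a small subset of voxels.
patterns = rng.normal(size=(n_trials, n_voxels))
patterns[labels == 1, :20] += 0.5

# Cross-validation: train on part of the data, test on the rest.
clf = LogisticRegression(max_iter=1000)
scores = cross_val_score(clf, patterns, labels, cv=5)
print(f"Decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

If the classifier scores reliably above 0.5 on the held-out data, there’s some consistent information in the patterns; if not, there isn’t. That’s all ‘brain-reading’ means in this context.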

So, the good Dr Marks (who, by the way, has but a single paper using fMRI to his name on PubMed) is using this technology to read people’s minds. However, needless to say, there are several issues with this. Firstly, to generate even a vaguely accurate solution, these algorithms generally need a great deal of data. The dream decoding study that MMT link to on their website (commentary, original paper) required participants to sleep in the MRI scanner in three-hour blocks, on between seven and ten occasions. Even after all that, the accuracy of the predictive decoding (distinguishing between pairs of different stimuli, e.g. people vs. buildings) was only between 55 and 60%. Statistically significant, but not terribly impressive, given that the chance level was 50%.
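To put that in perspective, here’s a quick back-of-the-envelope check – the trial count below is invented for illustration, since I don’t have the study’s exact numbers to hand:

```python
# How impressive is ~60% decoding accuracy when guessing would get you 50%?
# The number of test trials below is made up for illustration.
from scipy.stats import binomtest

n_trials = 200                       # hypothetical number of test trials
n_correct = int(n_trials * 0.60)     # 60% accuracy

result = binomtest(n_correct, n_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_trials} correct, p = {result.pvalue:.4f}")
# 'Significant', yes -- but the decoder is still wrong 4 times out of 10.
```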

My point here is not to denigrate this particular study (which is, honestly, a pretty amazing achievement); it’s that this technology is not even close to being a practical commercial proposition. These methods are improving all the time, but they’re still a long way from being reliable, convenient, or robust enough to be a true sci-fi style general-purpose mind-reading technology.

This apparently doesn’t bother Dr Marks though. He’s charging people $2000 a session to have their thoughts ‘recorded’ in the vague hope that some kind of future technology will be able to play them back:

“The visual reconstruction is kind of crude right now but the data is definitely there and it will get better. It’s just a matter of refinement,” Marks says. “That information is stored – once you’ve recorded that information it’s there forever. In the future we’ll be able to reconstruct the data we have now much better.”

No. N. O. No. The data is absolutely, categorically not there. Standard fMRI scans these days record at a resolution of 2-3mm. A cubic volume of brain tissue 2-3mm on each side probably contains several hundred thousand neurons, each of which may be responding to different stimuli, or involved in different processes. fMRI is therefore a very, very blunt tool in terms of capturing the fine detail of what’s going on. It’s like trying to take apart a Swiss watch mechanism when the only tool you have is a giant pillow, and you’re wearing boxing gloves. A further complication is that we still have so much to learn about exactly how and where memories are actually represented and stored in the brain. To accurately capture memories, thoughts, and even dreams, we’ll need a much, much better brain-recording technology. It may well be possible someday (and that ‘someday’ might even be relatively close), but the technology simply hasn’t been invented yet.
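If you want to see where the ‘several hundred thousand’ figure comes from, here’s the rough arithmetic. The density number is an assumed ballpark – estimates vary a lot between brain regions:

```python
# Back-of-the-envelope only: cortical neuron density is taken here as
# ~30,000 neurons per cubic mm, a rough ballpark that varies by region.
neurons_per_mm3 = 30_000

for side_mm in (2, 3):                       # typical fMRI voxel sizes
    n_neurons = neurons_per_mm3 * side_mm ** 3
    print(f"{side_mm} mm voxel: ~{n_neurons:,} neurons")
# => roughly 240,000 and 810,000 neurons, all summarised by one fMRI value.
```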

So, the idea that you can read someone’s mind in a single session, and preserve their treasured memories on a computer hard disk for future playback, is simply hogwash. I’m as excited by the possibilities in this area as the next geek, but it’s just not possible right now. Dr Marks is charging people $2000 a pop for a pretty useless service, no matter how optimistic he might be about some mythical future mind-reading playback device.

NB. I’ve got a lot more to say about MMT’s other services too, but this post’s got a bit out of hand already, so I’ll save that for a future one…

The power of a well-chosen image: EEG measures of brain activity and exercise

This picture:

[Image: the EEG heat maps in question, purportedly showing brain activity before and after a 20-minute walk]
…occasionally does the rounds on Twitter, often spurred by tweets from the kind of evidence-phobic accounts that publish whole lists of mind-blowing ‘facts’, at least 50% of which are made up. This picture has also prompted about a billion blog posts (like here, here and here), somewhat unsurprisingly written by the kind of people who like to get their scientific evidence from a single image on Twitter.

So what’s the problem here? What the image appears to suggest at face value is that brain activity is increased after a short bout of exercise (a 20-minute walk). Sounds reasonable, right? We know that exercise has various effects on brain function, and exercise in general is definitely a good thing, now that the Western world is suffering from massive rates of obesity, diabetes, etc. I really don’t have a problem with the message here; my problem is more with the way it’s presented.

The brain images are clearly from an EEG, but beyond that, there’s very little information about what they actually represent. There are lots and lots of different things you can measure with EEG technology, such as the P300, Error-Related Negativity, C1 and P1, or ongoing neural oscillations across a wide range of frequency bands. We have no information about which particular measure this image is describing. Secondly, we have no information about what the colours mean. Heat-map colour scales on brain images like this often represent statistical values (usually t or z scores), which is a convenient way of representing a large amount of numerical data in a visual-friendly format. Here though, we have no colour-scale information, so we have no idea what the colours represent.

Here are some brain images I just created from some MRI data I had lying around. It took about three minutes.

[Image: the same functional brain data displayed at two different statistical thresholds]

Big difference, right? Somewhat counter-intuitively, the left and right images above are actually the exact same functional brain data; all I did to create the right one was to lower the statistical threshold on the colour overlay, essentially saying “Show me more results, I don’t care if they’re statistically reliable or not.” People who do this kind of work are very clued-in to these kinds of issues, and would always look for a colour scale on images like this in research papers. Clearly, though, the general public aren’t that conversant with statistical issues in brain imaging – because why would they be?
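If you have a statistical map of your own lying around, you can reproduce the trick in a few lines with nilearn. The file name below is just a stand-in for whatever NIfTI map you happen to have:

```python
# Same data, two thresholds: the classic way to make a brain map look
# either sober or spectacular. 'my_stat_map.nii.gz' is a placeholder for
# any statistical map in NIfTI format.
from nilearn import plotting

stat_img = "my_stat_map.nii.gz"

# A reasonably strict threshold: only strong effects are shown.
plotting.plot_stat_map(stat_img, threshold=3.0,
                       title="Strict threshold (looks sparse)")

# Drop the threshold, and the very same data 'lights up' everywhere.
plotting.plot_stat_map(stat_img, threshold=0.5,
                       title="Lax threshold (looks dramatic)")
plotting.show()
```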

What we do have in the original image is an attribution to a guy called Chuck Hillman at the University of Illinois. Dr Hillman appears to be a perfectly respectable scientist, performing some perfectly respectable research focussing on the interaction between exercise and the brain. I have absolutely no problem with Dr Hillman or his obviously very worthwhile research. Looking through his articles, I can’t find an image which matches the one at the top of the post, although this paper (PDF, Figure 2, page 548) does contain one that’s somewhat similar. That image shows the amplitude of the P300 wave during a particular task, after a period of reading and after a period of exercise. Unfortunately the colour scale here is in raw units of EEG signal (microvolts), so it’s not totally clear whether that represents a statistically significant difference or not. If anyone can work out where the original image at the top comes from, please let me know in the comments!

As something of an aside, is an increase in brain activity necessarily a positive thing? Oxidative stress can potentially occur as the result of an increase in brain metabolism, and oxidative stress has been implicated as a potential causal factor in a huge variety of problems, from cancer to Alzheimer’s. One could even argue that lower brain activity is better because it indicates a more efficient use of cognitive resources; performing the same task with less activity equals greater efficiency – although using the concept of ‘efficiency’ in this way is currently fairly controversial.

The essential point here is that when images like this are presented in academic papers or presentations, they come packaged with a whole host of caveats, qualifications, and additional information. Of course, scientists often try to make visually arresting images in order to present their results with maximum impact and clarity, and (as long as they don’t cheat in some way) that’s entirely appropriate, and indeed useful. The problem comes when someone else takes those images, strips them of this essential contextual information and presents them uncritically, often in order to further their own agenda or aims. Without the context, these images become pretty much meaningless. If this kind of thing happened to some result from my own research, I’d be pretty embarrassed about it. As ever, a critical approach to this kind of uncritically presented ‘evidence’ is crucial.

 

Transcranial direct-current stimulation – don’t try it at home

"Many Shubs and Zulls knew what it was to be roasted in the depths of the Sloar that day I can tell you."

“Many Shubs and Zulls knew what it was to be roasted in the depths of the Sloar that day I can tell you.”

I’ve written before about tDCS, and in particular the device produced by foc.us, a company marketing tDCS to gamers. As a brief recap, tDCS involves passing a low-level electric current through your brain, thereby attempting to stimulate particular regions of the cortex in order to enhance particular functions. Academics have been using this method (and similar ones) for a while now, and showing some interesting effects in all kinds of motor, sensory and cognitive domains (for a fairly broad review see here; PDF).

When academics perform this procedure on their experimental subjects for the purposes of research, they have to get clearance from an ethical review board first, and they observe strict limits in order to ensure the safety of their participants, both in terms of the time they stimulate for and the amount of electrical current they use. However, there is a community of amateur tDCS enthusiasts who build their own equipment and zap their brains at home. If this sounds like a spectacularly bad idea to you, you’d be dead right. These guys (and let’s face it, it’s usually guys) naturally aren’t bound by the same safety rules; the only limit is their own stupidity.

tDCS appears to be becoming more mainstream, with commercial products like the foc.us headset and positive write-ups in media outlets (like this one and this one) helping to raise the profile of what has been, up until now, a pretty niche activity. This BBC report focuses on the military applications of the technology and proclaims that the US military are ‘very interested in its potential’. Yeah, well… the US military also ran a 20-year research program into remote viewing and other psychic phenomena (only discontinued in 1995!), so let’s not put too much faith in their ability to spot obvious bollocks.

The point I want to get across here is that DIY tDCS is not only pretty unlikely to actually do anything useful, but also potentially extremely dangerous. I know, right? Who’d have thought that passing electric currents through your brain might be a problem? The tDCS subreddit is full of horror stories, ranging from people suffering electrode burns (like this guy) to this story of a user suffering crippling anxiety, panic attacks and depression for more than a year after tDCS. Whether the tDCS actually caused these fairly extreme symptoms in this particular case is somewhat debatable, and probably unknowable, but the point is that relatively severe adverse events can, and do, happen with these devices. Most worryingly of all, there’s a report here on the electrical safety of the commercial foc.us device, which suggests that it doesn’t regulate the voltage in the manner its specifications claim, and can cause skin burns. This user claims to have suffered severe migraine-like pain after a session with the foc.us device.

To sum up:

Do not pass electrical currents through your head! It is a bloody stupid thing to do.

Seriously, if you want to give yourself some kind of an ‘edge’ in gaming, or studying, or whatever, just have a quadruple espresso – much safer and more effective.

Thanks to @neuroconscience for pointing out the tDCS horror-stories on Reddit.

 

Become a Cognitive Behavioural Therapist for £39? Pfffftttt….

Time Out London is currently offering this absolutely bloody ridiculous deal on their site: “93% off a Cognitive Behavioural Therapy Diploma”. Actually worth a “whopping £599”, it’s currently available for the low, low price of £39!

Pretty amazing. Especially when it turns out that the course is being run by the NLP Centre of Excellence, and NLP, as we all know, is a pile of incoherent and festering ordure. Also, a closer reading of the text reveals that the course is just a few e-books.

Cognitive Behavioural Therapy (CBT) is a serious set of therapeutic techniques, with a substantial evidence base, useful for treating several classes of mental health issues. It usually takes several years to become a CBT therapist – the most common route is to complete a clinical doctorate training program, and then do additional specialist training afterwards. In the UK, CBT therapists are registered with a professional body called the BABCP, are required to undergo regular peer supervision, and must conduct their therapy sessions within the guidelines of the association.

Anyone who thinks they can become a CBT therapist by reading a couple of manuals from some NLP charlatans is in serious need of therapy themselves. I bet the e-books aren’t even about CBT; it’s probably the same old NLP twaddle warmed over again with a new title. This is an out-and-out scam, plain and simple.

You keep using this word ‘neuroplasticity’. I do not think it means what you think it means.


So, I wanted to write a post about how the word ‘neuroplasticity’ is the current neuro-bullshitter’s favourite big sciencey-sounding word to throw around. I was going to explain how it’s actually such a broad umbrella term as to be pretty meaningless, and talk about things like LTP and synaptogenesis in the hippocampus, which (in contrast) are precise, well-defined terms and fascinating processes, and how your brain is changing in a ‘plastic’ manner even as you read these words. It was really going to be a great post.

Unfortunately (as so often seems to happen), it turns out that the mighty Vaughan Bell beat me to it by a scant three years with this typically outstanding post on mindhacks.com. So. I guess you should all just go and read that instead, and I’ll have to be content with my standard operating procedure and take the piss out of some quacks instead.

The ‘About the Science’ section of the Brain Balance Centers’ main website has some awesomely meaningless language that manages to work in some other big sciencey-sounding words too:

“It was once thought that the brain was static, unable to grow or change. But extensive research and in depth study of epigenetics has shown that it’s remarkably adaptable, able to create new neural pathways in response to stimulus in the environment, a branch of science called neuroplasticity.”

Ooh – epigenetics and neural pathways. Fans of meaningless brain cartoons should definitely check out that site too; their disconnected vs. connected diagram is fabulous.

The Lumosity website (a brain-training company) has some pretty choice language too:

“But when neuroplasticity’s potential is thoughtfully and methodically explored, this physical reorganization can make your brain faster and more efficient at performing all manner of tasks.”

There are lots of other examples I could paste in here. I spend a fair amount of time looking at these companies’ sites, and I’ve come to the conclusion that any mention of the word ‘neuroplasticity’ is basically a massive red flag. People are very fond of using it to promote these things, but mostly their arguments boil down to “Because: neuroplasticity!”, which, as Vaughan explained so eloquently, doesn’t mean anything at all without a whole additional layer of explanation, refinement and qualification.

So – a top tip: when you see the word ‘neuroplasticity’, think ‘bollocks’ instead. 99% of the time you’ll be absolutely dead-on.

The worst neurobollocks infographics on the web

Regardless of what you think of infographics (and personally, I think they’re largely a pustulent, suppurating boil on the bloated arse of the internet), there are some good, useful ones out there. However, these are vastly outweighed by the thousands of utterly ghastly, misleading, poorly-referenced and pointless ones.

Because I’ve been on holiday for the last week, my levels of rage and misanthropy have dropped somewhat from their usual DEFCON-1-global-thermonuclear-war-the-only-winning-move-is-not-to-play levels, so I thought trying to find the absolute worst neuroscience-related infographics on the web might be a good way to top the vital bile reserves back up again. And oh boy, was I right. There are some doozies.

First up is this purple and blue monstrosity titled ‘15 things you didn’t know about the brain’. Here we learn (amongst other howlers) that the capacity of the brain is 4 terabytes, men process information on the left side while women use both sides, and:

Women are more emotional because they have a bigger limbic system? Are you fucking kidding me? Also, it turns out that ‘Exploding head syndrome’ (also mentioned above) is a real thing, well… kind of a real thing. Well, actually not really a real thing at all.

Moving swiftly on then, this one, titled ‘The neurology of gaming’, is perhaps less laughable, but more dangerous in that it makes a lot of assertions without much to back them up. Nearly all the content here is currently pretty contentious, and while some of it might possibly be correct, it’s all presented as fact, with no nuance at all, and only the most cursory attempt at referencing. It also has this brilliantly awful graphic at the bottom:

What are those little coloured blocks supposed to represent, exactly? Numbers of male and female gamers? Activated brain voxels? Absolutely fucking nothing at all?

The best (by which I mean worst, naturally) brain-related infographics, though, are the ones related to the bullshit left-brain/right-brain myth. This is the idea that some people are more ‘left-brained’ and others are ‘right-brained’, and that this has some relationship to their personality, preferences etc., or that some types of information are processed by one hemisphere alone. This one is an absolute beauty, which manages (extra points!) to mix some learning-styles nonsense in there as well for good measure. Amongst many other obviously made-up bits of foolish drivel, it claims that the right brain is psychic(!), the left likes classical music while the right prefers rock, and the left likes dogs while the right likes cats.

Left and Right Brain

However, the top prize goes to this pitiful effort, which is chock-full of steaming great turds, but probably the best (worst) bit is reproduced below:

[Excerpt from the left-brain/right-brain infographic]

So, you can see which-side-brained you are by which nostril happens to be more blocked up, and men only have brain activity in the left hemisphere. Whoever wrote this deserves to be first up against the wall with a blindfold on when the neuro-revolution comes.

Just to finish on a positive note, while I was scouring the darkest pits of the interwebz to find these, I came across this lovely, helpful little comic about brain development, which manages to be informative, accurate and entertaining. Turns out good ones do exist after all, but infographics (like pretty much everything else) most definitely appear to obey Sturgeon’s Law. 

More eye-wateringly egregious neuromarketing bullshit from Martin Lindstrom

Martin Lindstrom is a branding consultant, marketing author, and (possibly because that wasn’t provoking quite enough of a violently hateful reaction in people) also apparently on a one-man mission to bring neuroscience into disrepute. He’s the genius behind the 2011 article in the New York Times (‘You love your iPhone. Literally’) which interpreted activity in the insular cortex (one of the most commonly active areas across a very wide variety of tasks and situations) as genuine ‘love’ for iPhones. This was a stunningly disingenuous and simple-minded example of reverse inference, and was universally derided by every serious commentator, and by many of the more habitually rigour-phobic ones as well.

Unfortunately, it appears his reputation as a massive bull-shitting neuro-hack hasn’t quite crossed over from the neuroscience community into the mainstream, as I realised this weekend when I settled down to watch The Greatest Movie Ever Sold. Morgan Spurlock’s documentary about branding, product placement and the general weirdness of the advertising world is generally excellent; however, it unfortunately makes the mistake of wheeling on Lindstrom for a segment on neuromarketing. You can see his piece from the movie in the video below:

Lindstrom conducts an fMRI scan with Spurlock as the subject, exposing him to a variety of advertisements in the scanner. Fair enough, so far. Then, however, Lindstrom explains the results using a big screen in his office. The results they discuss were apparently in response to a Coke commercial. According to Lindstrom, the activation here shows that Spurlock was “highly engaged” with the stimulus, and furthermore was so “emotionally engaged” that the amygdala, which is responsible for “fear, and the release of dopamine”, responded. Lindstrom then has no problem in making a further logical leap and saying “this is addiction”.

[Screenshot of Lindstrom’s fMRI results display, grabbed from the video]

Needless to say, I have a somewhat different interpretation. Even from the shitty low-res screenshot grabbed from the video and inserted above, I can tell a few things; primarily that Lindstrom’s pants are most definitely on fire. Firstly (and least interestingly), Lindstrom uses FSL for his fMRI analysis, but is using the crappy default results display. Learning to use FSLView would look much more impressive, Martin! Secondly, from the very extensive activity in the occipital lobe (and elsewhere), I’m able to pretty firmly deduce that this experiment was poorly controlled. fMRI experiments rely on the method of subtraction, meaning that you have two close-but-not-identical stimuli, and you subtract the brain activity related to one from the other. Say, as in this case, that you’re interested in the brain response to a Coca-Cola commercial. An appropriate control stimulus might therefore be, say, a Pepsi commercial, or even better, the Coke commercial digitally manipulated to include a Pepsi bottle rather than a Coke one. Then you subtract the ‘Pepsi’ scan from the ‘Coke’ scan, and what you’re left with is brain activity that is uniquely related to Coke. All the low-level elements of the two stimuli (brightness, colour, whatever) are the same, so subtracting one from the other cancels them out. If you just show someone the Coke advert and compare it to a resting baseline (i.e. doing nothing, no stimulus), you’ll get massive blobs of activity in the visual cortex and a lot of other places, but these results will be non-specific and won’t tell you anything about Coke – the occipital lobe will respond to absolutely any visual stimulus.
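Here’s a toy numerical sketch of why the control condition matters so much. The numbers are entirely made up (nothing here comes from Lindstrom’s data); the logic is the point:

```python
# Toy illustration of the subtraction logic, with invented numbers.
import numpy as np

rng = np.random.default_rng(1)
n_voxels = 1000

visual_response = rng.normal(5.0, 1.0, n_voxels)   # driven by ANY video
coke_specific = np.zeros(n_voxels)
coke_specific[:20] = 2.5                            # small Coke-specific effect

coke_scan = visual_response + coke_specific + rng.normal(0, 0.5, n_voxels)
pepsi_scan = visual_response + rng.normal(0, 0.5, n_voxels)
rest_scan = rng.normal(0, 0.5, n_voxels)

# Proper contrast: Coke minus a matched control isolates the Coke-specific bit.
proper = coke_scan - pepsi_scan
# Lazy contrast: Coke minus rest keeps the huge generic visual response,
# so nearly everything looks 'active'.
lazy = coke_scan - rest_scan

print("'Active' voxels, Coke vs Pepsi:", np.sum(proper > 2))
print("'Active' voxels, Coke vs rest: ", np.sum(lazy > 2))
```

The first contrast leaves you with a handful of genuinely Coke-related voxels; the second ‘lights up’ virtually the whole (toy) brain.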

From the very widespread activity evident in the brain maps above, it appears that this is exactly what Lindstrom has done here – compared the Coke advert to a resting baseline. This means the results are pretty much meaningless. I can even make a good stab at why he did it this way – because if he’d done it properly, he’d have got no results at all from a single subject. fMRI is statistically noisy, and getting reliable results from a single subject is possible, but not easy. Gaming the experiment by comparing active stimuli to nothing is one way of ensuring that you get lots of impressive-looking activation clusters, which you can then use to spin any interpretation you want.

fMRI is a marvellous, transformative technology and is currently changing the way we view ourselves and the world. Mis-use of it by opportunistic, half-educated jokers like Lindstrom impoverishes us all.

Neuromarketing gets a neurospanking

A brief post today just to point you towards a couple of recent articles which pull down the pants of the neuromarketing business and give it a thorough neurospanking (© @psychusup).

The first one is a Q&A with Sally Satel, one of the authors of the recently-published and pretty well-received book Brainwashed. Sally makes some good points about ‘neuroredundancy’ – that brain scan experiments often don’t really tell you anything you don’t already know. Read it here. There’s also a good article on Bloomberg by Sally and Scott Lillienfield here.

The other one is an article at Slate.com by associate-of-this-parish Matt Wall, which focuses particularly on a recent trend in neuromarketing circles – the use of cheap ‘n’ nasty EEG equipment and (potentially) dodgy analysis methods in order to generate sciencey-looking, but probably fairly meaningless results. Read that one here.

That’s all for now – I’ll be back with a proper post soon(ish).

A brief experiment for Tappers

Yeah, I’m a Stargate fan. What of it? That Sanctuary thing was dreadful though.

This one’s for all the tappers out there – people who believe that invisible energy meridians are distributed throughout the body, and that stimulating their end-points can lead to positive effects. My contention is that tapping actually has no effect at all on the body’s energy meridians, because the body doesn’t have energy meridians; they don’t exist. My alternative hypothesis is that the simple act of tapping while reciting tapping ‘scripts’ may simply serve to distract you from the issue at hand.

I want to propose a little experiment to test this. The next time you feel the urge to tap, do some ‘sham’ tapping instead. What I mean is, do some tapping that shouldn’t work. I notice that none of these diagrams of tapping points feature any points below the waist*, so tap yourself on the leg instead. While doing that, recite something else, rather than your normal tapping script. Anything you like; a nice poem, your shopping list, whatever. For extra nerd-cred points you could try the Bene Gesserit litany against fear. If I’m right, and it’s the simple act of performing the ritual which is responsible for the (putative) effects, then this routine should be as effective as your normal one.

Of course, this is a highly unscientific experiment in lots of ways. Ideally we’d have a large group of people, subject half of them to a course of ‘real’ tapping and the other half to a course of ‘sham’ tapping, and then look at the different effects. Crucially, the people would be ‘blind’, in that they wouldn’t know anything about tapping or what the hypotheses and aims of the experiment were. If you’re already a committed tapper, you’re probably fairly invested in believing that tapping works, and as a corollary, are perhaps unlikely to be fully invested in my ‘sham’ tapping protocol. Nevertheless, humour me, and give it a go with an open mind. I’d be very interested to hear your impressions.
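If anyone ever did run it as a real trial, the eventual analysis would be nothing fancier than comparing the two groups’ outcome scores. A minimal sketch with simulated data (all the numbers below are invented):

```python
# Sketch of how the proposed real-vs-sham tapping trial might be analysed:
# compare outcome scores between the two groups. All data here are simulated.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(42)

# Invented 'improvement' scores for 30 people per group. Under my hypothesis,
# both groups come from the same distribution (the ritual does the work).
real_tapping = rng.normal(loc=2.0, scale=1.5, size=30)
sham_tapping = rng.normal(loc=2.0, scale=1.5, size=30)

t_stat, p_value = ttest_ind(real_tapping, sham_tapping)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
# No significant difference would be consistent with the ritual, not the
# meridians, doing whatever work gets done.
```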

*Although there’s one obvious bodily end-point to stimulate down there… HUUUURRRRR.