The (statistical) power of stories

We humans love a good story. We even love a bad story (evidence: Hollyoaks is still on television). We all have them; we all tell them. Storytelling is like magic, according to Esquire writer Chris Jones.

Can stories make us better too?

The other week, Canadian researchers published a randomised controlled trial testing whether stories were more effective than standard information sheets at helping parents who bring children with croup to a hospital emergency department (ED).

Upon entering treatment rooms in the ED, parents were randomly placed into ‘treatment’ or ‘control’ groups. Two hundred and five parents in the control group were given standard information sheets containing dry, factual details about croup.

The 208 parents in the treatment group received a story booklet. If you’re interested, the same team describe how they made the booklets in a separate, freely available study. The booklets relayed stories from parents whose children once had croup. The idea was to provide information from the point of view of other parents’ experiences.

It turns out the stories didn’t reduce parental anxiety, though in follow-up interviews parents did report faster recovery times for their children. Parents receiving the story booklets also experienced greater levels of regret about bringing their child to the ED, perhaps after learning that most cases of croup clear up without needing treatment.

Croup mainly affects children. The main symptoms are a barking cough and grating sounds when breathing in, known as stridor.

One of the things that drew me to this study is that it comprehensively and honestly reports how the treatment (the story booklets) didn’t do what the researchers hoped: it didn’t confirm their hypotheses. Reading the stories didn’t significantly affect parental anxiety, didn’t change parents’ knowledge about croup and didn’t change satisfaction with treatment compared to parents who received the information sheets.

It’s all too rare that negative results like this are published. Negative results allow scientists to rule something out with confidence. Surely we’re doing something wrong if we don’t know what’s false as much as we know what’s true?

Without a good grasp of two important concepts, statistical significance and statistical power, researchers can’t work out when they are falsely rejecting hypotheses that are actually true, or falsely accepting hypotheses that are actually false. At best, they cannot say which is which with as much certainty as they’d like (or, indeed, with as much certainty as they claim).

In this study, to be sure of their claims, the authors worked out in advance how many people they’d need in the treatment and control groups in order to detect an effect of a given size (such as a change of three points on the anxiety scale). This is called a power analysis and is good science but, unfortunately, too rare an approach. (Here, I’m wagging fingers mainly towards the social sciences and the sub-disciplines I’m most familiar with: those involving crossings between anthropology, psychology and economics.)
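For the curious, here’s roughly what such a calculation looks like in Python. Every number below is an illustrative assumption of mine (a three-point difference on a scale with a standard deviation of 10, a significance level of 0.05 and a target power of 0.8), not the values the authors actually used:

    # A minimal power analysis sketch using statsmodels. The effect size is
    # hypothetical: a 3-point difference on an anxiety scale whose standard
    # deviation we assume to be 10, giving Cohen's d = 3 / 10 = 0.3.
    from statsmodels.stats.power import TTestIndPower

    effect_size = 3 / 10          # assumed: 3-point difference, SD of 10
    n_per_group = TTestIndPower().solve_power(
        effect_size=effect_size,  # standardised difference (Cohen's d)
        alpha=0.05,               # accepted false-positive rate
        power=0.8,                # desired chance of detecting a real effect
        alternative='two-sided',
    )
    print(f'Participants needed per group: {n_per_group:.0f}')  # ~175

Under those made-up assumptions you’d need around 175 parents per group, in the same ballpark as the 205 and 208 the study actually recruited.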

The Economist recently caused a kerfuffle with an article bemoaning the state of science. It was a sweeping, ‘dudes, seriously, sort it out’ kind of a piece that criticised several pillars of science: self-policing (such as peer review and replicating other people’s studies); careerism; and inappropriate use of statistics.

The latter has played on my mind, not least since I’m currently attempting proper analyses of my own. I’m becoming much more aware of the need for statistical power: the probability that your analysis will detect an effect that really exists. When power is low, hypotheses that are actually true get thrown out. On the flip side, a surprisingly large number of untrue hypotheses may squeeze under the seemingly-low bar of statistical significance.
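A quick simulation makes both failure modes concrete. Again, the numbers are made-up assumptions of mine (a true three-point effect, a standard deviation of 10 and an underpowered study of 30 participants per group):

    # A sketch of the two failure modes: missing real effects (low power)
    # and 'finding' nonexistent ones (significance by chance).
    import numpy as np
    from scipy.stats import ttest_ind

    rng = np.random.default_rng(42)
    n, trials = 30, 10_000

    # False negatives: a real 3-point effect, often missed at n = 30.
    misses = sum(
        ttest_ind(rng.normal(0, 10, n), rng.normal(3, 10, n)).pvalue >= 0.05
        for _ in range(trials)
    )
    print(f'Real effect missed in {misses / trials:.0%} of studies')  # ~80%

    # False positives: no effect at all, yet about 5% of studies find one.
    flukes = sum(
        ttest_ind(rng.normal(0, 10, n), rng.normal(0, 10, n)).pvalue < 0.05
        for _ in range(trials)
    )
    print(f'Nonexistent effect found in {flukes / trials:.0%} of studies')

In other words, an underpowered study misses the real effect about four times out of five, while one in twenty studies of a nonexistent effect will still clear the significance bar.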

These are, despite appearances, basic statistical concepts that nevertheless get ignored much of the time.

Research eventually (and hopefully) enters the public consciousness—things your friends and family believe; things your government turns into policy. Without the proper statistical tests, what we are told and what we come to believe may be phenomena that just don’t exist.

Ultimately, that’s the difference between good and bad storytelling.


Reference:
Hartling, Scott, Johnson, Bishop and Klassen (2013). A randomized controlled trial of storytelling as a communication tool. PLoS ONE 8: e77800. http://dx.doi.org/10.1371/journal.pone.0077800