Friday, January 27, 2012

Evidence-based ethics committee



Could we evidence our way to a better research ethics system? A bit more formal evaluation has always seemed to me like a very good idea. When I posted the original version of this cartoon in 2012, though, I didn't know of any controlled evaluation - despite the critical importance of research ethics, and the potential for ethics regulation processes to do harm themselves.

I updated the cartoon when, for the first time, I saw a controlled study related to research ethics. Mary Dixon-Woods and colleagues studied adding an ethics officer to research ethics committees, to find out whether that could make the process more efficient and improve the quality of outcomes.

It didn't go exactly according to plan - 31% of the time there was no contact between the ethics officer and the committee before the meeting. There wasn't an appreciable impact on outcomes - and it didn't speed up the process either. Hats off to all concerned: we're a little less ignorant about research ethics committees than we were before.

Dixon-Woods cited a scoping review showing how thin on the ground solid knowledge is about what could make research ethics review more reliably effective. Here's hoping this new study spurs copycats!



Disclosure: I spent years on national research ethics committees in Australia, but am not on any now. I am a member of the human ethics advisory group for PLOS One, and was a member of the BMJ's ethics committee for several years.

Update: 3 September 2016 

Tuesday, January 10, 2012

Heaven's Department of Epidemiology



Watch out for risk's magnifying glass - and cut your risk of being tripped up by 82%!

Whenever you see something tripling - or halving - a risk, take a moment before you let the fear or optimism sink in.

Relative risks are critically important statistics. They help us work out how much we might benefit (or be harmed) by something. But it all depends on knowing your baseline risk - your risk to start with.

If my risk is tiny, then even tripling or halving it is only going to make a minuscule difference: a half of 0.01% isn't usually a shift I'd even notice. Whereas if my risk is 20%, tripling or halving it could be a very big deal. Unless you know a great deal about the risks in question - or your own baseline risk - you need more than a relative risk to make any sense out of the data.
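
To see why the baseline matters so much, here's a minimal sketch in Python (not from the original post) that applies the same "halving" and "tripling" to the two baselines mentioned above - 0.01% and 20% - and shows the absolute change each one produces.

# A minimal sketch of how the same relative risk plays out against
# different baseline risks. The baselines are the 0.01% and 20%
# figures used above, purely for illustration.

def absolute_change(baseline_risk, relative_risk):
    """Absolute change in risk, in percentage points, when a baseline
    risk is multiplied by a relative risk."""
    return baseline_risk * relative_risk - baseline_risk

for baseline in (0.01, 20.0):        # baseline risk, in %
    for rr in (0.5, 3.0):            # "halving" and "tripling"
        change = absolute_change(baseline, rr)
        print(f"Baseline {baseline}%, relative risk {rr}: "
              f"{change:+.3f} percentage points")

# Output:
# Baseline 0.01%, relative risk 0.5: -0.005 percentage points
# Baseline 0.01%, relative risk 3.0: +0.020 percentage points
# Baseline 20.0%, relative risk 0.5: -10.000 percentage points
# Baseline 20.0%, relative risk 3.0: +40.000 percentage points

The same relative risk - a halving or a tripling - shifts the tiny baseline by hundredths of a percentage point, and the larger one by 10 to 40 percentage points.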

There's a good introduction to absolute and relative risks at Smart Health Choices.


This is one of the 5 shortcuts to keep data on risks in perspective, at Absolutely Maybe.


Cartoon and content updated on 3 June 2017: This post was originally the cartoon only, from my blog post for the British Journal of Sports Medicine.