I was talking to a student today who was getting exercised about the response rate to his questionnaire. He is aiming at around 200 individuals and hoping for around a 70% response rate. I hope he gets it, I really do, but... if he doesn’t, is it REALLY such a big deal? Well, you might say yes: a low response rate means that the results are unreliable, that we cannot extrapolate them to the wider population, and that extreme responses loom larger than they normally would and cause bias. I agree that all of these points matter if one is developing a cure for cancer or mapping the human genome to develop smart drugs, but if all you want to know is how physiotherapists treat back pain, or what nurse prescribers think about their role, then surely the quality of the information available is as important as, if not more important than, the size of the sample?
I’m not the first to think this. In 1997, Templeton et al. were making the case that, as long as care is taken to ensure that the group surveyed is representative of the larger population, sample size is not as important as we are led to believe. Perhaps educators and supervisors should spend more time with students discussing how they are selecting their target sample than dismissing useful research data because “you only report a 25% response rate”. Often such a comment highlights a lack of understanding on the part of supervisors or journal reviewers. I struggled to find a platform for a research paper that had a response rate of 16% - not great, I admit - but that actually translated into well over 570 respondents, all of whom could be argued to be representative of the target population of interest. The study provided new insight into an under-researched topic and could have disappeared into the Great Academic Marsh of Indifference had my co-author and I not persevered until we got it into print.
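The arithmetic behind this point is worth making concrete: the sampling error of a survey estimate depends on the absolute number of respondents, not the percentage who replied. A minimal sketch, assuming simple random sampling and the standard worst-case margin-of-error formula for a proportion (a simplification, since it ignores non-response bias, which is the real worry):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of error for a proportion estimated
    from n respondents, using the worst case p = 0.5."""
    return z * math.sqrt(p * (1 - p) / n)

# A 70% response rate on a sample of 200 yields 140 respondents.
print(f"140 respondents: +/-{margin_of_error(140):.1%}")  # roughly +/-8%

# A 16% response rate that nevertheless yielded 570 respondents.
print(f"570 respondents: +/-{margin_of_error(570):.1%}")  # roughly +/-4%
```

On this crude measure, the “poor” 16% response rate produces estimates roughly twice as precise as the “good” 70% one - which is exactly why representativeness, not the headline rate, deserves the scrutiny.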
Chalmers enlarged upon this point in an editorial for the Royal Statistical Society in 2006, when he suggested that under-reporting of so-called ‘poor response studies’ was leading to a publication bias. He made the point that much research (and I think this is particularly true of most postgraduate and doctoral research) is done to increase knowledge on a specific topic, and that this information should be shared with the wider world. As we move towards the publication of systematic reviews and increasing data synthesis across different disciplines, the time may be right to start to focus upon the place of smaller studies in the wider knowledge pool. Even small response studies add crumbs to the knowledge table (forgive the mixed pool/table metaphors here – perhaps it’s a water table). Furthermore, such ‘small response studies’ may encourage others to move the study design forward, replicate the work and validate it in that way. Replication research has, to some extent, become unfashionable and may be due for a renaissance. Either way, educators and supervisors should not be discouraging fledgling research students with this obsession with sample size; they should, perhaps, be focussing more upon what the study will do for the wider knowledge-using community.