Random Sampling and the Small Sample Size
“[Name withheld] recalls the time that he felt like giving up on the validity of random sampling. He was conducting a survey among 3,000 respondents and randomly chose 10 completed surveys to examine, to give him some idea whether interviewing was properly being conducted. Of the 10, he found interviewer error on a particular question on eight documents. So, that morning he assigned his entire staff to go over every last questionnaire. Among the remaining 2,990 documents, his staff found two with interviewer error.”
Think Twice Before Making Assumptions About Data
I purposely deleted the name of the researcher who was ready to give up on random sampling because I am not going to have kind words to say, or inferences to make, about her or him. Despite the relatively small population (3,000) being addressed, the random sample mentioned is far too small to use for making inferences. Under conventional standards used in the industry, inferences from anything less than n=341 would be suspect for this population, and I would hope we would not encourage clients to make inferences from such small sample sizes about things that matter. But I am not surprised that those rules were not applied by the researcher to his own business. Certainly the relatively large percentage of errors discovered (80%) should give pause, but it should not cause us to take a census unless there were no real cost to reviewing the entire population. One might assume that there was a business, i.e., ROI, reason for the initial sample of only 10. Perhaps a better strategy would have been a second sample of 10, which, given the probabilities involved, would have yielded a very different answer, one approaching zero errors. The discrepancy between the two samples should have dictated at least a third sample of 10 and possibly even a fourth. At that point a clearer picture of the population would have emerged, and relatively stronger inferences about both the population and the first sample could have been made to drive further decision making about the veracity of the interviewing method. Thus random sampling, with or without replacement, would have pointed to the correct inference at a savings of more than 98% of the cost of checking the entire population.
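The probabilities alluded to above can be checked directly from the anecdote's own numbers: 10 flawed questionnaires among 3,000, sampled without replacement, follow a hypergeometric distribution. A minimal sketch using only the Python standard library (the variable names are illustrative, not from the original):

```python
from math import comb

N, K, n = 3000, 10, 10  # population size, total flawed documents, sample size

def p_errors(k):
    """P(exactly k flawed documents in a simple random sample of n),
    drawn without replacement: the hypergeometric distribution."""
    return comb(K, k) * comb(N - K, n - k) / comb(N, n)

# A fresh sample of 10 would almost certainly contain zero errors,
# while the 8-of-10 result actually observed is astronomically unlikely
# under random sampling -- strong evidence the first 10 were not random.
print(p_errors(0))   # ≈ 0.97
print(p_errors(8))   # vanishingly small
print(n * K / N)     # expected errors per sample of 10, ≈ 0.033
```

In other words, a second random sample of 10 would, with about 97% probability, have contained no errors at all, which is exactly why the discrepancy between samples, rather than a full census, is the informative signal.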
While not intuitively pleasing, given the way we conventionally view our world, the relationship between sample size and population size is nowhere near linear. Just because you have a very large population does not mean you need a large sample. Most samples of the unsegmented population of the United States are limited to about n=1,200 to obtain conventional margins of error at acceptable levels of confidence. The cost of improving precision and confidence is such that ROI precludes larger samples unless inferences must be made about sub-groups that are known to differ. Conversely, just because you have a very small population does not mean that a very small sample will be appropriate. There are mathematical rules that describe the relationships among sample size, effect size, precision, confidence, and power, and they are not linear.
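The conventional figures cited in this piece (n=341 for a population of 3,000, roughly n=1,200 for national samples) fall out of the standard sample-size formula for proportions with a finite population correction. A sketch, assuming the usual worst-case proportion p=0.5 and 95% confidence:

```python
from math import ceil, sqrt

Z = 1.96   # z-score for 95% confidence
P = 0.5    # worst-case (maximum-variance) proportion
E = 0.05   # desired margin of error, ±5 percentage points

def required_n(N, z=Z, p=P, e=E):
    """Required sample size for a finite population of N
    (Cochran's formula with the finite population correction)."""
    n0 = z**2 * p * (1 - p) / e**2
    return ceil(n0 / (1 + (n0 - 1) / N))

print(required_n(3000))  # 341 -- the conventional standard cited above

def margin(n, z=Z, p=P):
    """Margin of error for a sample of n from a very large population."""
    return z * sqrt(p * (1 - p) / n)

print(margin(1200))      # ≈ 0.028, i.e. roughly ±3 points
```

Note how weakly the answer depends on population size: a population of 3,000 needs n=341, while an effectively infinite one needs only n=385 at the same precision, which is the nonlinearity the paragraph describes.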
Although there is a literature on very small sample sizes and the conditions under which they can be used to make inferences, a general rule of thumb is that the Central Limit Theorem (CLT), which serves as the basis for much of inferential statistics, requires a minimum sample size of about n=25 before it can be applied. There is another set of statistical methods, exact tests, used for estimation with very small samples, but they are not helpful in this situation.
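As a toy illustration of that rule of thumb (a simulation sketch, not from the original), sample means of n=25 draws from a strongly skewed exponential population already cluster around the population mean with a spread close to the σ/√n the CLT predicts:

```python
import random
import statistics

random.seed(1)  # fixed seed so the simulation is reproducible

def sample_mean(n):
    """Mean of n draws from a skewed population (exponential, mean 1, sd 1)."""
    return statistics.mean(random.expovariate(1.0) for _ in range(n))

# Distribution of 10,000 sample means at n=25
means = [sample_mean(25) for _ in range(10_000)]

# CLT prediction: means ≈ 1.0, with sd ≈ 1/sqrt(25) = 0.2
print(round(statistics.mean(means), 2))
print(round(statistics.stdev(means), 2))
```

Even from this heavily non-normal population, the distribution of sample means at n=25 is already well behaved; below that size, the normal approximation underlying conventional inference becomes increasingly unreliable.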
– Dr. Dennis W. Gleiber, Chief Research Scientist