6.1.4. Non-random methods

Non-random sampling is any other kind of sampling. Such methods are often used for speed and convenience, and because they do not require a sampling frame. Their big disadvantage is that sampling error cannot reliably be quantified, because the sampling properties of any estimators used are not known (the probability of choosing any one individual or sample cannot be determined).

Convenience or accessibility sampling involves asking whichever members of the population are easiest to reach to respond to a survey. An example is distributing survey questionnaires at a meeting of a local beekeeping association or at a beekeepers' convention. However, these people may not be representative of the whole target population of beekeepers, for example because of local weather conditions in the first case, or because attendees at a convention may be real enthusiasts whose bee husbandry practices are not typical of the general beekeeping population. A small convenience sample may be very useful for a pilot survey (see section 7.7) but is not recommended more generally.

An open invitation to respond to a survey available on a website, for example, yields a self-selected sample, unless the people invited to respond have already been selected (as in Charrière and Neumann (2010)).

In some countries, such as Algeria, the most effective method in terms of response rates is a face-to-face survey in the beekeeper's home or at meetings of beekeepers' associations or co-operatives, as mailed surveys produce an extremely low response. In Slovakia, it is also reported that the only method which works well is to distribute questionnaires at meetings, as data collection via emails, web pages and journals has very low rates of return. For example, only 5 questionnaires were returned from a beekeeping journal with a circulation of 8,000 copies (Chlebo, 2012, pers. comm.).

Given access to a population to be sampled, a survey organiser could try to take a "representative" sample by selecting what they think is a suitable mix of people to participate in the survey; this is called judgemental or purposive sampling. The difficulty is that important factors which have a bearing on the responses to the questions may have been overlooked. Judgemental sampling therefore carries a serious risk of badly biased samples.

Quota sampling is like stratified sampling in that stratification factors thought to be relevant to the survey are identified, but instead of drawing the participants from each stratum at random, the survey samplers themselves choose people subjectively from each stratum until sufficient people have been chosen and have responded. The main difficulty with this is the subjective choice of participants. Quota sampling also disguises non-response, as invited participants may decline to take part but sampling will continue until the quotas are achieved. Quota sampling can work well, but can also fail spectacularly badly (as seen most notably in pre-election polling; see Schaeffer et al. (1990) for an overview of this and other methods). An example of using non-randomised quota sampling to survey American beekeepers is described in VanEngelsdorp et al. (2010), who recognised the dangers of using this approach but judged that it had given results consistent with the pattern of US beekeeping. Box 5 gives an example comparing quota and stratified random sampling; a sketch contrasting the two selection mechanisms follows below.
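To make the contrast concrete, the minimal sketch below (in Python) draws a stratified random sample from a hypothetical sampling frame; the strata, membership lists and per-stratum sample sizes are invented for illustration only. Under quota sampling the same per-stratum counts would instead be filled by subjective choice, so no selection probabilities would be known.

```python
import random

# Hypothetical sampling frame: beekeepers grouped into geographical strata.
# (Stratum names and sizes are invented for illustration.)
frame = {
    "North": [f"N{i:03d}" for i in range(400)],
    "Central": [f"C{i:03d}" for i in range(900)],
    "South": [f"S{i:03d}" for i in range(700)],
}

# Per-stratum sample sizes (the "quotas"), here chosen roughly in
# proportion to stratum size.
n_per_stratum = {"North": 20, "Central": 45, "South": 35}

random.seed(1)  # reproducible illustration

# Stratified RANDOM sampling: every member of a stratum has a known, equal
# chance of selection, so the sampling error of estimates can be assessed.
# Quota sampling would fill the same counts by subjective choice instead.
stratified_sample = {
    stratum: random.sample(members, n_per_stratum[stratum])
    for stratum, members in frame.items()
}

for stratum, chosen in stratified_sample.items():
    print(f"{stratum}: {len(chosen)} of {len(frame[stratum])} members selected")
```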

A fundamental guiding principle in survey sampling is to use randomisation wherever possible in sample selection, to avoid subjective selection bias affecting survey results. Genuinely random samples are well known to have the best chance of being representative of the survey population, and should therefore be used unless random sampling really is not possible (Schaeffer et al., 1990) or would lead to such a low response rate that the results are of little use.

Finally, it is essential that any report of survey results clearly states the survey methodology and the response rate. This enables assessment of the reliability of the results, based on how representative the sample is likely to be. If past surveys have been carried out on the same population, one way to assess whether the survey has achieved a representative sample is to check the responses to a standard question whose answers are not expected to change much from survey to survey; results that differ markedly from what is expected may indicate that the sample is not representative. The breakdown of the participants by key indicators such as geographical area or class of operation size can also be examined, although some of these factors will ideally have been controlled for in the sample design by use of stratification.
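As a minimal sketch of the second check (the size classes and all figures are hypothetical, not taken from any real survey), the sample's breakdown by operation size can be compared with the known population breakdown, for example using a chi-square goodness-of-fit test:

```python
from scipy.stats import chisquare

# Hypothetical respondent counts per operation-size class, and the known
# breakdown of the whole membership (e.g. from association records).
sample_counts = {"1-5 colonies": 120, "6-20 colonies": 60, "21+ colonies": 20}
population_counts = {"1-5 colonies": 5200, "6-20 colonies": 2600, "21+ colonies": 700}

n = sum(sample_counts.values())
pop_total = sum(population_counts.values())

classes = list(sample_counts)
observed = [sample_counts[c] for c in classes]
# Expected counts if the sample mirrored the population breakdown exactly.
expected = [n * population_counts[c] / pop_total for c in classes]

stat, p_value = chisquare(f_obs=observed, f_exp=expected)
print(f"chi-square = {stat:.2f}, p = {p_value:.3f}")
# A very small p-value suggests the achieved sample does not reflect the
# population breakdown on this indicator.
```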




Box 5. A case study: comparing quota and stratified random sampling.

In Scotland, the first survey conducted in 2006 for the Scottish Beekeepers' Association (SBA) used a form of quota sampling. This survey used strata which were broadly geographical, as has also been done subsequently. After deciding on the split of the sample size between strata, the organisers contacted the SBA Area Representatives, who chose the required number of participants from those known to them personally and to the Secretaries of the Local Associations of beekeepers in each area. This allowed a known quota to be obtained from each geographical area. This approach was used purely because permission had not been obtained at that stage to use the SBA membership records for sampling purposes, and there was no other means of obtaining a list of beekeepers. The results (Peterson et al., 2009) suggested to the organisers that the participants ran larger beekeeping operations than were typical of beekeepers in Scotland as a whole, and that they were more conscientious and organised beekeepers than was typical. This is not entirely surprising, as the Area Representatives and Local Association Secretaries would probably have chosen people they thought were more organised and more likely to complete and return their questionnaires.

Subsequent surveys from 2008 onwards have used stratified random sampling. In the 2008 survey a modified Neyman allocation method (Schaeffer et al., 1990; Särndal et al., 1992; section 9) was used to split the sample between the main SBA areas, subdivided proportionally within these large areas into smaller geographical areas according to the number of SBA members (Gray et al., 2010). In 2010 the simpler proportional allocation was used, as there were insufficient data from the 2008 survey on which to base Neyman allocation and the 2006 data were felt to be out of date. In 2011, Neyman allocation was used again, based on winter loss rates. The results were more in accord with what was expected, and the samples are therefore probably more representative than the earlier one. The response rates, however, have been lower; the higher response rate in the 2006 survey (77%, compared with 42% in the 2008 survey) almost certainly resulted from the element of personal contact.
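For reference, the two allocation rules mentioned above can be written in standard survey-sampling notation (generic symbols, not figures from the SBA surveys): with total sample size n, N_h members and estimated standard deviation S_h in stratum h, and H strata in all,

```latex
% Proportional allocation: stratum sample sizes follow stratum sizes.
% Neyman allocation: larger or more variable strata receive more of the sample.
\begin{align*}
  \text{Proportional:} \quad n_h &= n \, \frac{N_h}{N}, \qquad N = \sum_{h=1}^{H} N_h, \\[4pt]
  \text{Neyman:} \quad n_h &= n \, \frac{N_h S_h}{\sum_{k=1}^{H} N_k S_k}.
\end{align*}
```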