The four main error types to be avoided in survey work are sample selection bias, nonresponse error, item nonresponse error, and response error.
Sample selection bias can occur either when a poor or nonrepresentative sample is selected from a target population, or when the list chosen for the survey mailout is the wrong list. In the case of the email survey of the Psyche readership, the lists used were those read by the subscribers to Psyche (and used for notification of new issues) and were thus clearly the correct lists. No sampling was undertaken: the entire list was used. In the case of the print survey, the random selection of members of the societies was undertaken by the societies themselves (according to their procedures for mailouts). It was not possible to influence this selection process, but it can be assumed to be truly random. Where possible (with the BPS), a sub-sample of the membership with an interest in the discipline area of Psyche was requested. This should ensure the best possible match between email and print survey respondents. As far as possible, therefore, causes of sample selection bias were eliminated in the survey design.
Nonresponse error relates to the bias inherent in the sub-population that responds to the survey. Selecting the right sample or list is of little use if a biased sub-sample is the only one that responds. Determining whether the responses are from such a biased sub-sample is very difficult. The standard advice is to aim for a high response rate (75% or higher according to [Mangione, 1998]). Such a response rate could not realistically be expected in the circumstances of the survey, and neither the email nor the print survey achieved anything near this. In the case of the email survey, a number of reminders were sent to try to improve compliance. These were not targeted reminders, because of difficulties in getting access to the list addresses. In the case of the print survey, the societies were either not prepared to supply the addresses or would only supply addresses for a one-shot mailout. This meant that reminder letters were not an available option. One attempt to improve response rates in the print survey was the inclusion of a small reward (the bookmark) to induce some sense of obligation. Because of the low response rates it is possible that the responses to the surveys are biased, but eliminating this source of error was not achievable given the constraints inherent in the mailing lists used.
Item nonresponse error occurs when respondents fail to answer individual questions, answer them incorrectly, or add comments that do not fit the existing categories. Where this occurred, it was coded as Blank or Invalid (or sometimes Blank/Invalid) and is so shown in the tables and graphs. The percentage of Invalid responses typically falls between 1% and 5%. The percentage of Blank responses typically falls between 5% and 10%, and can be interpreted as respondents not answering when they were unsure. Some causes of instructional ambiguity were picked up in the pilot stage, and some others were identified once the email survey was processed. These causes were remedied once detected.
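The coding described above can be sketched in a few lines. This is an illustrative sketch only, not the analysis code actually used for the surveys; the answer set and function names are hypothetical, but the classification logic (Valid, Blank, Invalid) follows the categories described in the text.

```python
# Illustrative sketch (hypothetical names): coding raw survey answers as
# Valid, Blank, or Invalid, then reporting the percentage in each category.
from collections import Counter

# Hypothetical set of valid answers for a single closed question.
VALID_ANSWERS = {"yes", "no", "unsure"}

def code_response(answer):
    """Classify a raw answer into Valid, Blank, or Invalid."""
    if answer is None or answer.strip() == "":
        return "Blank"      # respondent did not answer
    if answer.strip().lower() in VALID_ANSWERS:
        return "Valid"
    return "Invalid"        # answer outside the existing categories

def category_percentages(raw_answers):
    """Return the percentage of responses falling in each coding category."""
    codes = Counter(code_response(a) for a in raw_answers)
    total = sum(codes.values())
    return {cat: 100.0 * n / total for cat, n in codes.items()}

# Example: one Blank and one Invalid answer among four responses.
print(category_percentages(["yes", "no", "", "maybe"]))
```

In a real analysis the Valid set would differ per question, and Blank/Invalid percentages like those quoted above (1-5% Invalid, 5-10% Blank) would be tabulated question by question.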
Response error occurs when respondents misunderstand the wording of the questions as presented. Piloting the survey instrument, together with a number of valuable comments from colleagues, ensured a final survey that was as clear as possible while remaining short and focused.
Last modified: Monday, 18-Sep-2017 03:27:10 AEST
© Andrew Treloar, 2001. * http://andrew.treloar.net/ * email@example.com