EUROPEAN COMMISSION
EUROSTAT
Directorate F: Social Statistics
Unit F-4: Income and living conditions; Quality of life

QUALITATIVE METHODOLOGIES FOR QUESTIONNAIRE ASSESSMENT

THIS METHODOLOGICAL PAPER HAS BEEN PREPARED BY ISTAT1

Development of a survey on Gender-based Violence

Luxembourg, 2017

1 ISTAT is supporting the work on development of the methodology for a survey on gender-based violence through the GRANT

CONTENTS

Introduction
Section A: Pre-testing the questionnaire
  Pre-testing stages
Section B: Focus group
  Planning a Focus Group
  Analysis of the data
Section C: Experts review
  Questionnaire Appraisal Systems
Section D: Cognitive interview
  Introduction
  Cognitive interviewing techniques
  Planning a Cognitive interviewing project
  Sampling and recruitment of respondents
  Recruitment of respondents
  Interviewers' selection and training
  Interviewers' training
  Developing interview protocols
  Some logistical aspects
  Conducting cognitive interviews
  Data Management, Analysis and Interpretation
  Writing the Cognitive Test Report
  Conclusion
Annexes
  Appendix A: Examples of probes
  Appendix B: Recommendations for cognitive testing of the EU questionnaire on GBV against women and men
  Appendix C: Draft cognitive interviewing protocol for the EU questionnaire on GBV against women and men
  Appendix D: Recommended training program. Cognitive Test Training for testing the EU Questionnaire on GBV against women and men

2. Laboratory methods (pre-field)

• Pretesting of the questionnaire: can be conducted as an informal test that helps identify poor question wording or ordering, errors in questionnaire layout or instructions, and problems caused by the respondent's inability or unwillingness to answer the questions. Pretesting can also be used to suggest additional response categories that can be pre-coded on the questionnaire, and to provide a preliminary indication of the interview length and of refusal problems. At this stage the goal is not to draw a representative sample of the target population, but to ensure that the instrument is tested under various difficult conditions in order to stress its reliability. Interviewers for the pre-test should be particularly skilled at identifying problematic areas of the questionnaire or of the interview. It is recommended to provide interviewers with a detailed list of the aspects of the questionnaire to be tested; their feedback should be recorded and analysed. Debriefing sessions with interviewers can take place after a pre-test in order to feed their input into the (re)design process. It is also advised that the researchers responsible for the survey conduct some interviews themselves to test the suitability of the instrument. Pretesting can be iterative until the final version of the questionnaire is generated.
• Cognitive interviewing: cognitive methods are used to examine respondents' thought processes as they answer the survey questions and to ascertain whether or not they understand what the questions mean and are able to provide accurate responses9. Cognitive methods are discussed in depth in Section D.

3. Field test

• Behaviour coding10: provides a systematic and objective means of examining the effectiveness of the questionnaire by analysing the interviewer-respondent interaction. It can be a field test or a laboratory test. It requires a medium sample size (30 interviews are considered sufficient to detect problems), well trained coders and consistent use of the coding scheme.

• Interviewer debriefing: discussion groups with interviewers, with the aim of obtaining useful feedback on the performance of the questionnaire. Interviewer debriefing can be planned daily, soon after the data collection session, or after each interview. It is recommended to take notes during the meeting or to audio-tape it.

• Respondent debriefing: an interview with the respondent, soon after the survey interview, to get useful feedback on issues with the questionnaire. This kind of interview is most useful when combined with quantitative tests such as behaviour coding and quantitative data such as non-response rates.

• Follow-up interviews: a second, semi-structured interview, following the initial interview and conducted by a different interviewer, with the aim of identifying issues with the questionnaire content and questions. Respondents are led to remember how they interpreted and understood a subset of questions when answering the first interview. It requires well trained interviewers. It is recommended to conduct the follow-up interview shortly after the first interview and to provide the interviewer with the answers already given by the respondent and an outline of the main topics.
• Pilot survey: a survey that reproduces all the survey phases on a small number of survey units or respondents. It is not important that the sample units or respondents be representative of the overall population. Rather, it is important that they represent the various ways respondents will be contacted, so that they can reveal any difficulties that may emerge during the actual survey. While a pre-test focuses more on the questionnaire alone, a pilot study deals with the entire survey.

4. Experimental tests

• Split sample test (the alternative test): refers to controlled experimental testing of questionnaire variants or data collection modes to determine which one is "better" or to measure differences between them11. Split sample experiments may be conducted within a field or pilot test, or they may be embedded within production data collection for an ongoing periodic or recurring survey. When used as a pre-testing method, it is necessary to define in advance the standards by which the different versions of the questionnaire or questions will be judged. The sample size for each alternative should be designed to ensure sufficient statistical power to detect real differences on the aspects under assessment. It is important to use a large sample and to assign participants randomly to the test versions.

9 Forsyth B.H. and Lessler J.T. (1991). "Cognitive Laboratory Methods: a Taxonomy", in Biemer et al. (eds.), Measurement Errors in Surveys, J. Wiley, N.Y.
10 Oksenberg L., Cannell C. and Kalton G. (1991). New Strategies for Pretesting Survey Questions. Journal of Official Statistics 7(3): 349-394; Groves R.M. et al. (2009). Survey Methodology (2nd ed.). Hoboken, NJ: John Wiley & Sons.
11 Statistics Canada (1998). "Quality Guidelines". Dissemination Division, Circulation Management, 120 Parkdale Avenue, Ottawa, Ontario, K1A 0T6.
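The sample-size requirement mentioned above can be made concrete with the standard normal-approximation formula for comparing two proportions. This is a minimal illustrative sketch, not part of the paper's methodology; the disclosure rates, significance level and power below are invented for the example:

```python
from math import ceil, sqrt
from statistics import NormalDist

def n_per_arm(p1: float, p2: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate respondents needed per questionnaire version to detect a
    difference between two proportions with a two-sided z-test (normal
    approximation, equal allocation across arms)."""
    z_a = NormalDist().inv_cdf(1 - alpha / 2)  # critical value for the significance level
    z_b = NormalDist().inv_cdf(power)          # quantile for the desired power
    p_bar = (p1 + p2) / 2                      # pooled proportion under the null hypothesis
    numerator = (z_a * sqrt(2 * p_bar * (1 - p_bar))
                 + z_b * sqrt(p1 * (1 - p1) + p2 * (1 - p2))) ** 2
    return ceil(numerator / (p1 - p2) ** 2)

# Illustrative: detecting a drop in a disclosure rate from 50% to 40%
# between two question wordings, at alpha = 0.05 with 80% power
print(n_per_arm(0.50, 0.40))  # → 388 respondents per split-sample arm
```

Halving the detectable difference roughly quadruples the required sample per arm, which is one driver of the cost of split sample designs.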
This type of pre-test is expensive; nevertheless it provides important information for comparing survey questions and for assessing the extent and direction of the impact of changes in question order, wording or layout. For pre-testing the questionnaire of the EU GBV against women and men survey, split sample tests may be conducted using, for example, different sequences of the violence screening (partner and non-partner), different wordings of the questions about sexual violence, or different options for counting the number of episodes that occurred. Split sample tests can also be used for testing different interview modes.

SECTION B: FOCUS GROUP

Focus groups have a long history. Originally called the focused interview, the method had its origin in 1941, when the sociologist Robert Merton was invited by Paul Lazarsfeld to assist him in the evaluation of audience response to radio programs. In the time since Merton's pioneering work12, focus groups have become an important research tool for applied social scientists such as those who work in program evaluation, marketing, public policy, advertising, and communication. Focus groups may be useful at virtually any point in a research program, but they are particularly useful for exploratory research, where rather little is known about the phenomenon of interest, to acquire more knowledge about the topic and to clarify the goals of the study. As a result, focus groups tend to be used very early in a research project and are often followed by other types of research that provide more quantifiable data from larger groups of respondents. Focus groups have also proven useful following the analysis of large-scale, quantitative surveys. In this latter use the focus group facilitates interpretation of quantitative results and adds depth to the responses obtained in the more structured survey. Among the more common uses of focus groups are the following: 1. Obtaining general background information about a topic of interest; 2.
Generating research hypotheses that can be submitted to further research and testing using more quantitative methods; 3. Stimulating new ideas and creative concepts; 4. Learning how respondents talk about the phenomenon of interest, which in turn may facilitate the design of questionnaires, survey instruments, or other research tools that might be employed in more quantitative research; 5. Pre-testing an available preliminary questionnaire; 6. Interpreting previously obtained quantitative results.

Box 1 - Advantages and disadvantages of the Focus Group13

Advantages:
· It is comparatively easy to conduct
· It allows the researcher to explore topics and to generate hypotheses
· It creates the opportunity to collect data from the group interaction, which concentrates on the topic of the researcher's interest
· It has high "face validity"
· It has low cost in relation to other methods
· It supplies results quickly
· It allows the researcher to increase the size of the sample of qualitative studies

Disadvantages:
· It does not take place in a natural atmosphere
· The researcher has less control over the data that are generated
· It is not possible to know whether the group interaction reflects individual behaviour
· Data analysis is more difficult: the interaction of the group forms a social atmosphere, and comments should be interpreted within this context
· It demands carefully trained interviewers
· It takes effort to assemble the groups
· The discussion should be conducted in an atmosphere that facilitates dialogue

12 Merton, R. K., Kendall, P. (1946). The Focused Interview. The American Journal of Sociology, Vol. 51, No. 6, pp. 541-557. The University of Chicago Press.
13 Based on Krueger, R. A. (1994). Focus Groups: A Practical Guide for Applied Research (2nd ed.). Thousand Oaks: SAGE Publications; Morgan, D. L. (1988).
Focus Group as Qualitative Research (2nd ed.). A Sage University Paper.

Structuring degree:
- self-managed groups (unstructured)
- semi-structured groups (with interview guide)
- structured groups (with interview schedule)

Number of moderators:
- 1 or 2: whenever the FG is conducted by two moderators, one should have specific skills in the focus group process and the other should have expertise in the content area of the study, or could take the role of a non-participant or participant observer who takes notes and observes participants' non-verbal communication

Role of the moderator:
- Negligible (only provides the topic for the discussion and the interaction rules)
- Restricted (facilitates the discussion flow, promotes interchange and moderates conflict)
- Directive (with a well structured interview guide …)

Length:
- From 60 to 120 minutes maximum

Strategies to encourage participation:
- Incentives (a gadget, an honorarium, transportation, or light refreshments)
- Awareness-raising strategies
- Recruitment of participants interested in the topic

Data collection strategies:
- Notes by a non-participant or participant observer
- Audio-recording
- Video-recording, with or without a one-way mirror

Strategies of analysis:
- Content analysis
- A matrix based on the study objectives and on the questions asked

Variants of focus groups include:
• Two-way focus group: one focus group watches another focus group and discusses the observed interactions and conclusions
• Dual moderator focus group: one moderator ensures the session progresses smoothly, while another ensures that all the topics are covered
• Duelling moderator focus group (fencing moderator): two moderators deliberately take opposite sides on the issue under discussion
• Respondent moderator focus group: one and only one of the respondents is asked to act as the moderator temporarily
• Client participant focus group: one or more client representatives participate in the discussion, either covertly or overtly
• Mini focus groups:
groups are composed of four or five members rather than 6 to 12
• Teleconference focus group: a telephone network is used
• Online focus group: computers connected via the internet are used

Planning a Focus Group

This paragraph presents, in more detail, the characteristics to be considered in the planning phase of a Focus Group.

Number of FGs: a minimum of 4 to 5. Marketing researchers, for example, vary the number of groups as a function of which meetings are or are not producing new ideas. If the moderator can clearly predict what will be said in the next group, then the research is concluded. This usually happens after the third or fourth session. More generally, FGs can be iterated until the researchers reach their objectives, for example, until the final version of the questionnaire is generated.

Number of participants: 8 to 12. Smaller groups may be dominated by one or two members, while larger groups are difficult to manage and inhibit participation by all members of the group.

Sample selection: who will participate in the study depends on the purpose of the research. People are usually segmented by demographic factors such as geographical location, age, size of the family, status, gender, etc. The choice between homogeneous or heterogeneous groups is somewhat a function of the need to maintain a reasonable homogeneity inside the group in order to encourage discussion. The most general advice is that each participant should have something to say on the topic and should feel comfortable speaking with the others, even though this does not mean that participants should have the same perspective on the topic.

Length of FG: one to two hours.

Number of moderators: 1 or 2.

Moderator role and characteristics: the moderator is the key to ensuring that a group discussion goes smoothly. The focus group moderator should be well trained in group dynamics and interview skills.
Depending on the intent of the research, the moderator may be more or less directive with respect to the discussion. It is important to recognize that the amount of direction provided by the interviewer does influence the types and quality of the data obtained from the group. "More structured groups answer to the researcher's questions; less structured groups help to reveal the perspective of the group participants"16.

Facilities: although FGs can be conducted in a variety of sites ranging from homes to offices, and even by conference telephone, it is most common for focus group sessions to be held in facilities designed especially for focus group interviewing. Such facilities provide one-way mirrors and viewing rooms where observers may unobtrusively observe the interview in progress. Focus group facilities include equipment for audio or video taping of the interview, and perhaps a small transmitter for the moderator to wear (a "bug in the ear") so that observers may have input into the interviews.

Developing the interview: essential to the FG is the careful development of an interview guide reflecting the research questions. It is important to begin with broad, open-ended questions and with low emotional intensity issues, and then move to high emotional intensity issues. It is also important to have probes ready to prompt the focus group participants for further explanation or depth on the topics. The guide can be just a map, a semi-structured guide or a structured guide, depending on the purposes of the study and on the researcher's objectives.

Box 3 - Steps in the design and use of focus groups17

Preparing for the Session
1. Identify the major objective of the meeting.
2. Plan the session (see below).
3. Call potential members to invite them to the meeting. Send them a follow-up invitation with a proposed agenda, session time and list of questions the group will discuss.
4.
Plan to provide a copy of the report from the session to each member, and let them know you will do this.
5. About three days before the session, call each member to remind them to attend.

Developing Questions
1. Carefully develop five to six questions. A session should last about 1.5 hours; in this time, one can ask at most five or six questions.
2. Always first ask yourself what problem or need will be addressed by the information gathered during the session, e.g., examine whether a new service or idea will work, further understand how a program is failing, etc.
3. Focus groups are basically multiple interviews. Therefore, many of the guidelines for conducting focus groups are similar to those for conducting interviews.

Planning the Session
1. Scheduling - Plan meetings to be one to 1.5 hours long.
2. Setting and Refreshments - Hold sessions in a conference room, or another setting with adequate air flow and lighting. Configure chairs so that all members can see each other. Provide name tags for members, as well. Provide refreshments.
3. Ground Rules - It's critical that all members participate as much as possible, yet that the session move along while generating useful information. Because the session is often a one-time occurrence, it's useful to have a few short ground rules that sustain participation, yet do so with focus. Consider the following three ground rules: a) keep focused, b) maintain momentum and c) get closure on questions.
4. Agenda - Consider the following agenda: welcome, review of agenda, review of goal of the meeting, review of ground rules, introductions, questions and answers, wrap up.
5. Membership - Focus groups are usually conducted with 6-12 members who share some similar characteristic, e.g., similar age group, status in a program, etc. Select members who are likely to be participative and reflective.

16 Morgan, D. L. (1988). Focus Group as Qualitative Research (2nd ed.). A Sage University Paper.
17 Carter McNamara, MBA, PhD, Authenticity Consulting, LLC.
Attempt to select members who don't know each other.
6. Plan to record the session with either an audio or audio-video recorder. Don't count on your memory. If this isn't practical, involve a co-facilitator who is there to take notes.

Facilitating the Session
1. The major goal of facilitation is collecting useful information to meet the goal of the meeting.
2. Introduce yourself and the co-facilitator, if used.
3. Explain the means used to record the session.
4. Carry out the agenda (see "Agenda" above).
5. Carefully word each question before it is addressed by the group. Allow the group a few minutes for each member to carefully record their answers. Then facilitate discussion around the answers to each question, one at a time.
6. After each question is answered, carefully reflect back a summary of what you heard (the note taker may do this).
7. Ensure even participation. If one or two people are dominating the meeting, then call on others. Consider using a round-table approach, going in one direction around the table and giving each person a minute to answer the question. If the domination persists, note it to the group and ask for ideas about how participation can be increased.
8. Closing the session - Tell members that they will receive a copy of the report generated from their answers, thank them for coming, and adjourn the meeting.

Immediately After the Session
1. Complete your written notes, e.g., clarify any scratchings, ensure pages are numbered, fill out any notes that don't make sense.
2. Write down any observations made during the session. For example, where did the session occur and when? What was the nature of participation in the group? Were there any surprises during the session? Did the tape recorder break?
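The notes and recordings gathered through the steps above are typically reduced to coded themes when the sessions are analysed. A minimal sketch of computer-assisted tallying of analyst codings follows; the participant labels and theme codes are invented for illustration:

```python
from collections import Counter, defaultdict

# Hypothetical analyst codings: (participant, theme) pairs assigned while
# reading the session notes or transcript against a pre-agreed coding grid.
codings = [
    ("P1", "question too long"), ("P2", "question too long"),
    ("P3", "term unclear"),      ("P1", "term unclear"),
    ("P4", "question too long"), ("P2", "order confusing"),
]

theme_counts = Counter(theme for _, theme in codings)
participants_per_theme = defaultdict(set)
for person, theme in codings:
    participants_per_theme[theme].add(person)

# Report only themes raised by more than one participant, i.e. themes with
# group-level rather than purely individual support.
for theme, count in theme_counts.most_common():
    breadth = len(participants_per_theme[theme])
    if breadth > 1:
        print(f"{theme}: {count} mentions by {breadth} participants")
```

Counting how many different participants raised a theme, not just how often it was mentioned, guards against a single dominant member inflating a theme's apparent importance.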
Analysis of the data18

Analysis baseline: different analyses can be carried out using the collected information, from simple narrative descriptions based on the notes taken during the focus group discussion, to more complex analysis of recorded transcripts. The type of analysis depends, in addition to the time and resources available, on the research goals and on how the results are going to be used. For example, if the goal of a focus group is to get practical suggestions or ideas to be explored in more depth later, a minimal level of analysis, such as comparing impressions among participants, can be sufficient; if, however, important decisions will be made on the basis of the results, a more rigorous analysis is needed. In this latter case, before embarking on the analysis, all the focus group discussions need to be transcribed verbatim.

Analysis steps: in order to analyse the data collected during the focus group, it is necessary to: a) build a reading/analysis grid to be applied to the entire material (grid construction is facilitated by a semi-structured focus group); b) identify the range of opinions that emerged. Content analysis techniques can also be applied to the full transcript of the discussion. Computer-assisted content analysis techniques are now available and can facilitate the task.

Strategies of analysis:

18 Corrao, S. (2000). Il focus group, Franco Angeli, Milano.

record other comments on the item that do not fall under existing coding categories. The time needed by the expert to code the whole questionnaire varies depending on the complexity of the questionnaire itself. There are several other checklists available, generally called questionnaire appraisal systems.
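A checklist-based appraisal system of the kind just mentioned can be represented as a simple per-question record of Yes/No flags. The sketch below is illustrative only; the category names loosely follow the QAS steps discussed in this section, and the flagged question is hypothetical:

```python
# Appraisal categories, loosely following the QAS steps described in this
# section (abridged; the actual scheme has lettered sub-items per step).
QAS_STEPS = ["reading", "instructions", "clarity", "assumptions",
             "knowledge/memory", "sensitivity/bias", "response categories"]

def appraise(question_id: str, flags: dict) -> dict:
    """Record one expert's Yes/No judgements for one question and return
    the categories flagged as problematic."""
    unknown = set(flags) - set(QAS_STEPS)
    if unknown:
        raise ValueError(f"unknown appraisal categories: {unknown}")
    problems = sorted(step for step, has_problem in flags.items() if has_problem)
    return {"question": question_id, "problems": problems}

# Hypothetical appraisal of one draft question
result = appraise("Q12", {"clarity": True, "sensitivity/bias": True,
                          "reading": False})
print(result)  # → {'question': 'Q12', 'problems': ['clarity', 'sensitivity/bias']}
```

Structuring the judgements this way makes it easy to aggregate flags across several reviewers and rank questions by the number of problems identified.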
When expert reviews are organized as group sessions, the discussion is chaired by a moderator, and for each question the debate may either follow a structured pattern using a standardized coding scheme, to be filled in before the meeting, or adopt a less structured, free-form approach without a standardized coding list (Eurostat, 2006). However, even in the latter case, when reviewers do not fill in any checklists, the moderator is asked to steer the discussion towards topics like wording, terms, skip instructions, other instructions, layout, comprehension, etc., and after the discussion a report on the results should be written. It is recommended that the questionnaire designer also participate in the group discussion, but with a receptive mindset, avoiding any aggressive or defensive behaviour; it is common practice to audio or video record the discussion in order to be able to double-check the minutes taken by the appointed note-taker. A detailed report on the discussion results must follow. The draft questionnaire and other information must be sent to the expert allowing adequate time for him/her to respond (if consulted independently) or to prepare for the expert panel. In particular, the questionnaire must be accompanied by a short note that clearly states the key aims and objectives of the survey and draws attention to the questionnaire design problems and issues on which advice is sought. Other relevant information includes any immovable constraints regarding the scope and design of the questionnaire (e.g., mode of administration, length, questions inserted for comparability with other surveys, etc.) (Campanelli P., 2008; Blake M., 2015) and the coding scheme if relevant. Expert review is a quick, cost- and time-effective pre-testing method that aims to correct or remove obvious problems in the questionnaire without involving respondents.
Nevertheless, the lack of respondent input can be tricky, because respondents' backgrounds and life circumstances vary a great deal, and it is difficult for experts to anticipate and understand all of these variables without information on how respondents will interpret and respond to the questions. Experts may differ regarding the number and types of problems they identify in the survey questions, as well as in their change recommendations; problems identified may not occur in the actual survey, or not in a way that would compromise survey data (Blake M., 2015). The use of a coding scheme is recommended, but it requires experts in questionnaire design who know how and when to apply a particular code. Sometimes less experienced reviewers will identify issues in the questionnaire that may not have very serious consequences for data quality, but expert reviewers are not exempt from this risk either (Biemer P. & Lyberg L., 2003). Furthermore, when a group session with experts is planned, finding a meeting time that suits everybody is often challenging, and even though individual feedback can always be collected separately, this means forfeiting the added value of the discussion.

Questionnaire Appraisal Systems

Back in 1992, on the basis of the question-and-answer process and informed by cognitive interviewing results, Lessler, Forsyth and Hubbard (1992) developed a coding scheme for expert appraisal, the Questionnaire Appraisal Coding System. This was very exhaustive19 in indicating specific problems within a question; nevertheless, the scheme indicates neither what causes the problem nor how the question should be improved. Practitioners also found it too detailed and extensive for effective and efficient use in assessing question errors (especially for the long questionnaires used in social surveys). Alternative coding schemes have been proposed.
Willis and Lessler (1999) proposed a shorter coding scheme, the "Question Appraisal System (QAS)", based on several previous question appraisal systems and, in part, on a method developed to examine and classify the cognitive processes inherent in the question-answering process (Lessler and Forsyth, 1996). The expert is expected to examine each question by considering specific categories of question characteristics step by step, deciding at each step whether the question exhibits features that are likely to cause problems. In completing the appraisal, the expert is expected not only to indicate whether a problem occurs but also the reason why it occurs. In examining the question, the expert should follow the steps and indicate the types of problems presented in the table below.

19 See Forsyth, Lessler and Hubbard (1992) for details.

The Question Appraisal System (QAS-2009)

STEP 1 - READING: Determine whether it is difficult for the interviewers to read the question uniformly to all respondents.
  1a. WHAT TO READ: Interviewer may have difficulty determining what parts of the question should be read. (Yes/No)
  1b. MISSING INFORMATION: Information the interviewer needs to administer the question is not contained in the question. (Yes/No)
  1c. HOW TO READ: Question is not fully scripted and therefore difficult to read. (Yes/No)

STEP 2 - INSTRUCTIONS: Look for problems with any introductions, instructions, or explanations from the respondent's point of view.
  2a. CONFLICTING OR INACCURATE instructions, introductions, or explanations. (Yes/No)
  2b. COMPLICATED instructions, introductions, or explanations. (Yes/No)

STEP 3 - CLARITY: Identify problems related to communicating the intent or meaning of the question to the respondent.
  3a. WORDING: Question is lengthy, awkward, ungrammatical, or contains complicated syntax. (Yes/No)
  3b. TECHNICAL TERM(S) are undefined, unclear, or complex. (Yes/No)
  3c.
VAGUE: There are multiple ways to interpret the question or to decide what is to be included or excluded. (Yes/No)
  3d. REFERENCE PERIODS are missing, not well specified, or in conflict. (Yes/No)

STEP 4 - ASSUMPTIONS: Determine whether there are problems with assumptions made or the underlying logic.
  4a. INAPPROPRIATE ASSUMPTIONS are made about the respondent or about his/her living situation. (Yes/No)
  4b. ASSUMES CONSTANT BEHAVIOR or experience for situations that vary. (Yes/No)
  4c. DOUBLE-BARRELED: Contains more than one implicit question. (Yes/No)

STEP 5 - KNOWLEDGE/MEMORY: Check whether respondents are likely to not know or have trouble remembering information.
  5a. KNOWLEDGE may not exist: Respondent is unlikely to know the answer to a factual question. (Yes/No)
  5b. ATTITUDE may not exist: Respondent is unlikely to have formed the attitude being asked about. (Yes/No)
  5c. RECALL failure: Respondent may not remember the information asked for. (Yes/No)
  5d. COMPUTATION problem: The question requires a difficult mental calculation. (Yes/No)

STEP 6 - SENSITIVITY/BIAS: Assess questions for sensitive nature or wording, and for bias.
  6a. SENSITIVE CONTENT (general): The question asks about a topic that is embarrassing, very private, or that involves illegal behavior. (Yes/No)
  6b. SENSITIVE WORDING (specific): Given that the general topic is sensitive, the wording should be improved to minimize sensitivity. (Yes/No)
  6c. SOCIALLY ACCEPTABLE response is implied by the question. (Yes/No)

STEP 7 - RESPONSE CATEGORIES: Assess the adequacy of the range of responses to be recorded.
  7a. OPEN-ENDED QUESTION that is inappropriate or difficult. (Yes/No)
  7b. MISMATCH between question and response categories. (Yes/No)
  7c. TECHNICAL TERM(S) are undefined, unclear, or complex. (Yes/No)
  7d. VAGUE response categories are subject to multiple interpretations. (Yes/No)
  7e. OVERLAPPING response categories. (Yes/No)
  7f. MISSING eligible responses in response categories. (Yes/No)
  7g. ILLOGICAL ORDER of response categories.
Yes No STEP 8 OTHER PROBLEMS: Look for problems not identified in Steps 1–7. 8. Other problems not previously identified. Yes No For more information see Questionnaire Appraisal System QAS-99 Research Triangle Institute To conclude, using the QAS, experts are expected to give well-structured feedback which allows to code the problems and find solutions. When assessing the questions, experts are asked to think that respondents vary by age and different levels of education or income and life experience and that may affect the understanding the question. For example, if the question focuses on particular health problem, respondents having and not having the problem should be able to give the response (Willis G. & Lesser J., 1999). Another coding scheme is the one developed by staff at Statistics Netherlands Condensed Expert Questionnaire Appraisal Coding System (Snijkers G., 2002): Example: Problems in questionnaire with regard to: Comprehension of question Information processing Reporting 20 Lessler, B.H. Forsyth, J. (1996). A Coding System for Appraising Questionnaires. In Schwarz N., Sudman S. (eds), Answering questions. Methodology for determining cognitive and communicative processes in survey research, San Francisco (California): Jossey-Bass Lessler, J., Forsyth, B., and Hubbard, M. (1992),Cognitive Evaluation of the Questionnaire. In C. Turner, J. Lessler, and J. Gfroerer (Eds.), Survey Measurement of Drug Use: Methodological Studies, National Institute on Drug Use, Rockville, MD. pp. 13–52 Olson, K. (2010), An Examination of Questionnaire Evaluation by Expert Reviewers. Field Methods 22:4 (2010), pp. 295–318; doi: 10.1177/1525822X10379795 Presser, S. & J. Blair (1994). Survey pretesting: Do different methods produce different results? Sociological Methodology24:73-104). Rothgeb J., Willis G., Forsyth B. (2001). Questionnaire pre-testing methods: Do different techniques and different organizations produce similar results? 
Proceedings of the Section on Survey Research Methods, American Statistical Association.
Snijkers, G. (2002). Cognitive Laboratory Experiences on Pre-testing Computerized Questionnaires and Data Quality. Heerlen: Statistics Netherlands.
Willis, G. (2005). Cognitive Interviewing. Thousand Oaks, CA: Sage Publications.
Willis, G., Lessler, J. (1999). QAS: Questionnaire Appraisal System. Rockville, MD: Research Triangle Institute.
Willis, G., Schechter, S., and Whitaker, K. (1999). A Comparison of Cognitive Interviewing, Expert Review, and Behavior Coding: What Do They Tell Us? Paper presented at the Joint Statistical Meetings, Baltimore, MD. Proceedings of the Survey Research Methods Section of the American Statistical Association (pp. 28–37). Washington, DC: American Statistical Association.

SECTION D: COGNITIVE INTERVIEW

Introduction

Among the pre-testing techniques aimed at improving the survey instrument, and consequently data quality, cognitive interviewing has emerged as one of the most prominent methods for identifying and correcting problems in survey questions. Indeed, more than any other question evaluation method, cognitive interviewing offers a detailed depiction of the meanings and processes used by respondents to answer questions, providing the researcher with a wide range of information about sources of response error. In particular, it can identify sources of response error that the interviewer and the survey practitioner usually fail to see, and which have to be eliminated or reduced to ensure the accuracy of survey instruments and the high quality of the data collected. Through cognitive interviewing it is possible to determine whether the wording of a specific question fully conveys its meaning, and to quickly identify problems such as redundancy, missing skip instructions and awkward wording.
The origins of cognitive interviewing lie at the intersection of survey methodology and cognitive psychology; in particular, interest in cognitive testing originated from an interdisciplinary seminar on the Cognitive Aspects of Survey Methodology (CASM), where cognitive psychologists and survey researchers met to study the cognitive factors that may influence survey responses and the ways in which survey questions should be designed to take such factors into account. The CASM movement not only brought attention to the issue of measurement error, but also established the idea that individual processes, especially respondents' thought processes, must be understood in order to assess the validity and potential sources of error (Schwarz N., 2007).

Cognitive interviewing relies on the cognitive model introduced by Tourangeau (1984). This model divides the question-answering process into four stages: comprehension of the question, retrieval of relevant information from memory, the judgement/estimation process, and the response process. As the cognitive process leading to question answering is a universal process that occurs in the mind of each and every respondent, each respondent, regardless of demographic or personal background, goes through these four steps to formulate his/her answer. The CASM approach relies on the assumption that, when a survey question fails, this is due to problems occurring at one (or more) of these stages, which cause incorrect answers. Thus, the main purpose of a cognitive test is to identify potential problems that may occur at each stage of the answering process; listing the goals of the cognitive test for each stage, as in the following table, may also prove useful:

1. Comprehension
Definition: Respondents understand and interpret the question.
Aims of the cognitive test: To test whether respondents understand the question as intended by the researcher who designed it (what does the respondent believe the question is asking, and what do specific words and phrases in the question mean).
Possible problems to be detected: Unknown terms, ambiguous concepts, long and complex questions.

2. Retrieval
Definition: Respondents search memory for relevant information to answer.
Aims of the cognitive test: To establish whether the information needed can actually be recalled by respondents (ability to recall the information, and what type of information the respondent needs to recall), and to check the strategy used by respondents to retrieve information.
Possible problems to be detected: Recall difficulty, no prior knowledge or experience, perceived irrelevance of the topic.

3. Judgement/Estimation
Definition: Respondents evaluate and/or estimate while deciding on an answer.
Aims of the cognitive test: To evaluate how much mental effort respondents dedicate to answering the question and to assess their degree of interest in the subject of the survey; to check whether the question wording could lead to more "socially desirable" answers; to identify questions that could cause embarrassment or defensive behaviors.
Possible problems to be detected: Biased or sensitive question, estimation difficulty, impact of social desirability on judgement.

4. Response
Definition: Respondents provide information in the requested format.
Aims of the cognitive test: To test whether respondents correctly interpret the answer categories provided, and whether such categories match those spontaneously generated by respondents; in particular, the presence of incomplete and/or ambiguous response options needs to be verified.
Possible problems to be detected: Incomplete response options, response options that do not fit the respondent's understanding or judgement of the question, response influenced by social desirability, unwillingness to answer.
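For analysis, findings from individual interviews are often coded against a frame like the four-stage scheme above. A minimal sketch in Python of such a tally (the category labels, note format and question IDs are illustrative, not a published coding scheme):

```python
from collections import Counter

# Illustrative coding frame derived from the four-stage model above.
# Labels are shorthand for this sketch, not an official taxonomy.
STAGES = {
    "comprehension": {"unknown term", "ambiguous concept", "long/complex question"},
    "retrieval": {"recall difficulty", "no prior experience", "perceived irrelevance"},
    "judgement": {"biased/sensitive wording", "estimation difficulty", "social desirability"},
    "response": {"incomplete options", "options do not fit", "unwilling to answer"},
}

def tally(findings):
    """Count coded problems per (question, stage).

    findings: iterable of (question_id, stage, problem) tuples taken
    from interview notes.
    """
    counts = Counter()
    for qid, stage, problem in findings:
        if stage not in STAGES:
            raise ValueError(f"unknown stage: {stage}")
        counts[(qid, stage)] += 1
    return counts

# Hypothetical notes from three interviews:
notes = [
    ("Q3", "comprehension", "unknown term"),
    ("Q3", "comprehension", "ambiguous concept"),
    ("Q7", "retrieval", "recall difficulty"),
]
print(tally(notes))  # e.g. Counter({('Q3', 'comprehension'): 2, ('Q7', 'retrieval'): 1})
```

A tally of this kind makes it easy to see which questions accumulate problems and at which cognitive stage they fail, which is the information the table above is designed to capture.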
To recap, cognitive interviewing helps improve the data collection instrument by providing evidence on whether respondents understand questions in the intended way, are able and willing to provide the requested information, and can respond using the answer options provided.

Cognitive interviewing has also emerged as a key qualitative method for pre-testing and evaluating survey questionnaires to be used in cross-national research. It is recognized that individual life circumstances and personal perspectives (clearly influenced by socio-economic status, level of education, age, etc.), but also country-specific socio-cultural factors, determine how respondents interpret and elaborate survey questions. How a respondent goes about the four stages of the question-answer process (comprehension, recall, judgement and response) is informed by the socio-cultural context in which he/she lives. Through a comparative analysis of cognitive interviews it is possible to identify patterns of interpretation and patterns of error across groups and countries due to these types of socio-cultural differences. "The ultimate goal of cognitive interviewing studies is to better understand question performance. Again, this includes not only identifying respondent difficulties, but also identifying the interpretative value of a question and the way in which that question may or may not be consistent with its intent – across particular groups and in different contexts" (Miller K., 2014, page 4). Furthermore, cognitive interviewing is used in cross-national research because it can not only uncover unexpected and undesired patterns of interpretation but also "uncover translation issues that might go unnoticed in other translation assessment processes such as expert reviews" (Schoua-Glusberg A. & Villar A., 2014, page 66).
In brief, cognitive interviewing is useful as a pre-testing method for a cross-national survey questionnaire because it can reveal translation problems, sources of question error, and problems related to cultural differences.

Cognitive interviewing techniques

Beatty (2004, page 45) defines cognitive interviewing as "the practice of administering a survey questionnaire while collecting additional verbal information about the survey responses; this additional information is used to evaluate the quality of the response or to help determine whether the question is generating the sort of information that its author intends". This definition describes the most common application of this pre-testing technique: in-depth interviews with a small sample of respondents, in which respondents first answer a draft survey question and then provide verbal information on the cognitive process behind their answer. In particular, respondents are asked to describe how they came upon their answers.

It is generally advised to develop a cognitive interviewing technique that combines think-aloud with verbal probing, considering the aims of the cognitive test as well as the ability of the respondent to think aloud. A hybrid between the two approaches is a practical solution that seeks to maximize the autonomous flow of respondents' thoughts on the questions, without hesitating to probe when it is deemed appropriate to collect the information needed (e.g. DeMaio T., Rothgeb B., & Hess J., 1998; Beatty P., 2004; Willis G., 2005; Beatty P. & Willis G., 2007; d'Ardenne J., 2015).
Planning a Cognitive interviewing project

In order to test a questionnaire, or just a few questions, by cognitive interviewing, several factors should be taken into account, such as: the level of development of the questionnaire/questions; the time and financial resources available; whether expert or non-expert cognitive interviewers can be relied on, and how many are available; the types of persons to be included in the sample of the survey target population; the use of other pre-test methods and the possibility of conducting more than one round of cognitive interviews; where the cognitive interviews will be conducted; what documentation must be prepared; and some ethical issues. In other words, it is necessary to design and plan the cognitive test in all its methodological and logistical aspects, related to:
- Sampling and recruitment of respondents
- Development of the interview protocol
- Selection and training of the interviewers
- Conducting of cognitive interviews
- Data management, analysis and interpretation
- Writing the cognitive test report.

How long it will take to implement a cognitive test project depends on a number of factors, including: ease of recruitment of the respondents, anticipated number of rounds of testing, number of interviews to be achieved in one or more rounds, number of interviewers and, of course, number of questions being tested. Nevertheless, the main factor is the actual available time, because the deadline by which the final version of the tested questionnaire must be available is the main time constraint.

Sampling and recruitment of respondents

Regarding the sample, two main issues need to be addressed: sample characteristics and sample size. Cognitive testing is a qualitative method in which in-depth interviews are conducted paying explicit attention to the mental processes respondents use to answer survey questions.
The interviewer is not interested in the answer itself but in the process that leads the respondent to give the answer; therefore the sample of respondents will be selected using a non-probability sampling technique of the purposive type. It is important to clearly define the sample selection criteria to ensure that the sample includes those particular types of people who "represent" the characteristics of interest in relation to the research objectives. Indeed, the characteristics or criteria chosen for purposive selection are important for two reasons: a) they ensure that all key characteristics relevant to the way the questions could be interpreted and answered by the survey's target population are covered; b) they ensure diversity within the sample so that differences in interpretation can be explored (Collins D. & Gray M., 2015). The participants to be included in the sample have to reflect the target population for which the questions are designed, as well as those individuals in this population who will most likely experience or process questions in a variety of ways. Therefore, it is important to select respondents with the most diverse socio-demographic characteristics, living in different territorial areas, to increase the variability of contexts and thus of the problems that may arise (Miller K., 2002). Furthermore, respondents should be chosen according to the various skip patterns and topics in the questionnaire. Only with this type of sample will it be possible to identify the characteristics of the questions that could cause problems for respondents (Beatty P. & Willis G., 2007). The recommended characteristics of the respondents and the sample size depend on the complexity of the subject being studied, and therefore of the questions, on the degree of progress of the testing tools, and on the ultimate goal of the study. There is no general agreement on the ideal number of interviews.
Beatty and Willis (2007) report that current practice is based on the assumption that a small sample (around 15 individuals) with relevant study features is sufficient to highlight the most critical issues. The right number of participants in cognitive interviewing "may be hard to specify beforehand as some problems may be easily identified, while others are harder to root out, perhaps because they are relatively subtle or affect only respondents with certain characteristics or experiences. Similarly, some response tasks may pose difficulties for most respondents, while others affect only a small proportion of respondents, but perhaps cause serious measurement error when they occur" (Blair J. et al., 2006). As the sample size increases, so does the likelihood of observing a given problem in a set of cognitive interviews, or of detecting more and more diversified problems (Blair J. & Conrad F., 2011). The size of a purposive sample for a cognitive test nevertheless tends to be small (10-20), because it is also influenced by time, budget and other resource constraints, and by whether other pre-testing methods are anticipated. "The ultimate number of interviews is based not on a particular numerical goal, but on the ability to construct a theory that explains why respondents answer questions the way they do and, in the end, the construct (or set of constructs) that a particular question captures" (Willson S. & Miller K., 2014, page 19). Furthermore, as cognitive interviewing is expected to continue until no new problems or patterns of interpretation are discovered, this may require more than one round of testing. According to Willis (2005), the number of interviews recommended per testing round depends on the seriousness of the problems identified: if after some interviews it is clear that there are major problems to be rectified, then it is better to apply the modifications required and test the changes.
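The sample-size point made by Blair and Conrad can be illustrated with elementary probability. Under the simplifying assumptions that interviews are independent and that a respondent affected by a problem always reveals it, the chance of observing a problem at least once in n interviews is 1 - (1 - p)^n, where p is the proportion of respondents the problem affects. A short sketch (illustrative only; real detection rates are lower, because affected respondents do not always reveal a problem):

```python
def p_detect(p: float, n: int) -> float:
    """Probability that a problem affecting a proportion p of respondents
    surfaces at least once in n independent interviews, assuming an
    affected respondent always reveals the problem."""
    return 1 - (1 - p) ** n

# A common problem (p = 0.5) is very likely caught in a small round,
# while a subtle one (p = 0.1) often slips through:
print(round(p_detect(0.5, 10), 3))  # 0.999
print(round(p_detect(0.1, 10), 3))  # 0.651
```

This is why small rounds reliably surface the most widespread problems, while rare or subgroup-specific problems motivate additional rounds or larger, more diverse purposive samples.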
Cognitive testing is seen as an iterative process, and Willis (2005) also suggests planning at least two rounds, since "the nature of the CT rounds tends to change over the course of the development of a questionnaire. Early in the process, findings relate not only to particular question wordings but to more global issues, such as the appropriateness of the survey measurement of major concepts that it is attempting to cover, while later rounds focus mainly on polishing specific question wordings".

Recruitment of respondents

The recruitment process is often the most time-consuming part of the cognitive interviewing project. It is important to address two main issues: who is responsible for the recruitment, and which method will be used. Based on the time and resources available and on other factors such as sample size and individual characteristics, a decision on who will be in charge of the recruitment process needs to be made. Some institutions have a specific unit or center in charge of all cognitive tests, whose members are also responsible for recruiting potential respondents; others rely on the researchers or the cognitive interviewers to find the participants; still others tend to use ad-hoc external agencies. There are a number of different methods to recruit people, which can be used alone or in combination, based on the types and number of participants and the resources available. The most common methods are:
- Direct recruitment approaches prospective participants face-to-face, by telephone, SMS or email. It is possible to recruit people directly by going door to door or setting up stands in strategic locations where the types of persons to be interviewed are easily found. When approaching prospective participants, share some basic information about what the study is about, why it is being carried out, and what is requested of the participant. If they are interested, a few questions should be asked to establish their eligibility.
- Snowballing or chain sampling uses participants who have already been recruited and/or interviewed, who may help to identify other people they know who might fit the sampling criteria. This technique is useful when a quite specific, or even hidden, subset of the population must be engaged. A research project information leaflet, with contact details, needs to be prepared for distribution to the additional participants. Generally, this method is supplementary to the main recruitment method.
- Using advertisements includes putting up posters or leaflets on notice boards in places where prospective interviewees can be found, or in other places where they may catch people's eyes; distributing leaflets or flyers; and advertising in newspapers and newsletters. Advertisements can also be virtual, for example on websites, online forums or social media. This method is effective for recruiting very specific groups. It is important to keep in mind, however, that some people belonging to the same target group may not be reached by the particular media selected; therefore it is better to combine this with other recruitment methods.
- Recruiting via organizations or groups is helpful to recruit a specific target of people who are likely to be members or users of, or affiliated with, particular groups, organizations or services. This may save time in the recruitment process, but the individuals selected in this way may not reflect the variety of persons in the group.
- Pre-existing sampling frames can also be used. These may include previous research participants who have given consent to be re-contacted for future studies, or people who sit on a research recruitment panel. This can be a very efficient recruitment method because you potentially already know at least some information about the people you are going to approach.
One factor that can influence the time needed to recruit participants and facilitate their availability is a financial incentive.
Indeed, it is common practice, if the budget allows, to provide a small financial incentive (cash, voucher…) as a token of appreciation and acknowledgement of the effort and time involved in taking part in the cognitive interview. Participants are requested to sign a receipt at the time of the cognitive interview. A reimbursement of travel or other expenses should also be considered, if appropriate. Depending on the type of recruitment method used, or their combination, the following documents need to be prepared:
- Recruitment instructions
- Quota sheet for recording the individuals selected

20 This section is based on Collins D. and Gray M., "Sampling and Recruitment", in Collins D. (ed.), Cognitive Interviewing Practice, SAGE, 2015. More details can be found in this chapter.

- how test questions should be administered
- which cognitive interviewing techniques should be used and how.
The protocol is closely linked to the type of approach chosen for the cognitive interview, which may vary along a continuum ranging from "think aloud" techniques to highly standardized "verbal probing", and it can be adapted accordingly. Nevertheless, irrespective of the type of approach, the protocol usually includes a standardized introduction for respondents to explain the test objectives and its development. If the "think aloud" approach is applied, the interview protocol may simply consist of the list of questions to be tested, along with the recommendation to the interviewer to keep reminding respondents to express their thinking processes and thoughts aloud while answering questions. As mentioned before, the "think aloud" method can be concurrent (while the respondent is providing the answer) or retrospective (elicited by the interviewer after the answer is provided, using precise pre-scripted probes or spontaneous probes); in any case, this must be specified. The interview protocol should also instruct the interviewers on how to introduce the think-aloud approach.
This is usually done in three steps:
- Explain what it is about: "Please say 'out loud' anything that comes into your head whilst you are answering…", or "When you answer the questions I would like you to tell me what you are thinking…", or "Please tell me what is going through your mind as you are considering each of your answers…"
- Give a practical demonstration with a simple example: "Try to visualize the place where you live, and think about how many windows there are. As you count up the windows, tell me what you are seeing and thinking about." (Willis G., 1994)
- Let the respondent practice the think-aloud technique before the interview takes place. Not all respondents are at ease with this approach, so the interviewer needs to offer positive feedback and praise to encourage reticent participants to continue talking during the practice.
Keeping in mind that not all respondents feel comfortable thinking aloud, and may therefore not provide sufficient information or may not verbalize all of their thinking processes, some probes can help elicit more information. It is up to the interviewer to decide during the interview which types of probes to use. The "verbal probing" approach can be more or less standardized depending on the type of probes. When highly standardized, the probes are pre-scripted, meaning that they have already been decided by the researcher and written in the protocol, with the exact wording to be used and in the exact position where they have to be asked. In this case, the instructions to the interviewer are to strictly follow the testing protocol so as to obtain a structured interview. One advantage of using pre-scripted probes is that all the testing aims are addressed and the same areas are explored consistently for every participant. Nevertheless, it is quite common that very experienced interviewers in particular rely also, or exclusively, on spontaneous probes.
In this case, the interviewers are allowed to generate their own spontaneous probes to explore any issues raised during the interview (even if these have not been anticipated in the protocol), to clarify what the respondents said, or to investigate inconsistencies in their answers. To facilitate the interviewer's job, a list of suggested probes can also be added to the protocol. Generally, in the cognitive interview protocol the probing questions are highlighted with different colors or typefaces, or placed in a box labeled "probe". The definition and formulation of the probing questions is a very sensitive phase that can greatly enhance the effectiveness of the cognitive test in capturing the respondents' point of view. Therefore, it is first necessary to conduct an in-depth evaluation of the questions to be tested, in order to identify any problems that can be addressed during the test and which the probes can help shed light on. Probes should be phrased in such a way as to be:
- Neutral. Biased probes can increase the likelihood of problems going undetected.
- Balanced. Any state inferred should be balanced with reference to an opposite state.
- Open. To encourage the respondent to talk, the probes should not be answerable with a single word (i.e. good probes start with Why… How… When…).
- Simple. Probes should not be long-winded or double-barreled or contain multiple clauses, as these are likely to be misunderstood (d'Ardenne, 2015).
Furthermore, the probes should be limited in number and relevant to the purpose of the testing. Unnecessary probes will put too much strain on the respondent, who may get tired quickly, jeopardizing the accuracy of the information provided during the entire interview. While it is important to cover all the objectives of the testing through the probes, a good balance should be found between the accuracy of the information collected and the length of the interview.
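As a rough illustration, criteria of this kind can even be turned into an automatic first-pass screen of a drafted probe list. The keyword lists and thresholds below are arbitrary heuristics invented for this sketch, not part of d'Ardenne's guidance, and no such check replaces expert review of the probes:

```python
# Heuristic first-pass screen for draft probes, loosely inspired by the
# open/simple/not-double-barreled criteria above. Thresholds are arbitrary.
OPEN_STARTERS = ("why", "how", "when", "what", "tell")

def screen_probe(probe: str) -> list[str]:
    """Return a list of warnings for a draft probe (empty list = no flags)."""
    words = probe.strip().rstrip("?").split()
    flags = []
    if not words:
        return ["empty probe"]
    if words[0].lower() not in OPEN_STARTERS:
        flags.append("may not be open-ended (could invite a one-word answer)")
    if len(words) > 20:
        flags.append("long-winded")
    if " and " in probe.lower():
        flags.append("possibly double-barreled")
    return flags

print(screen_probe("Why did you choose that answer?"))  # []
print(screen_probe("Did you find the question clear and easy?"))
# flags it as closed-ended and possibly double-barreled
```

Flagged probes still need a human judgement call; for example, "and" within a single clause is harmless, while a genuinely double-barreled probe must be split in two.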
(See Appendix A.)
Depending on the stage of questionnaire development and the objectives of the testing, the same cognitive interview can make use of both approaches, for the entire questionnaire, for some of the questions, or for some sections. The thinking aloud should not be influenced by the interviewer in any way; therefore it is important that the probes be administered only after the participant has completed his/her thinking aloud. As stressed by d'Ardenne (2015), once the cognitive interview protocol has been developed, it is important to test it. Mock interviews with colleagues or other people will provide useful information on how well the protocol has been designed, preempt any unexpected issues, show whether the probes are appropriate, well phrased and working well, and pick up any missing or unclear instructions. The length of the interview must be checked too, and should the protocol be deemed too long, there are two options:
- drop some of the material from the protocol: some questions may not be tested, or some probes may be unnecessary. This will require revisiting the testing priorities and the testing aims;
- develop two (or more) versions of the protocol and test the different versions on different participants.

Some logistical aspects:

Location: The interview may take place in a cognitive laboratory, if available at the institute, or in any quiet and private room, such as a conference room or an empty office. It may even be carried out at the participant's home or in locations that are comfortable for otherwise reticent groups (i.e. offices of service agencies, churches, libraries), as long as privacy can be maintained. As the set-up of the location is also important, the interviewer needs to check beforehand that the location available is appropriate, especially when it is the participant's home or another public place.
It is necessary that both the interviewer and the respondent feel comfortable with the location and the furniture, and that the interview can be conducted without external interruption or distraction. The interviewer and the respondent should sit face-to-face or at 90 degrees, not too close, and out of direct sunlight.

Equipment: It is common practice to record cognitive interviews and sometimes to videotape them. Video recording is not always feasible, due to the location, the equipment needed, or the lack of consent from the respondent; nevertheless, its use also makes it possible to study the behavior of the respondent (as well as the performance of the interviewer, for further improvement). Moreover, depending on the type of survey instrument being tested, e.g. a self-completion questionnaire, video recording may be a requisite. In this case, it is important that this information be clearly highlighted in the recruitment materials and/or during face-to-face recruitment, to avoid any last-minute refusal from the respondent. Generally, when the interview is recorded, it is better to hide the camera or make it unobtrusive. The recording (audio or video) serves two purposes:
- It frees the interviewer from the pressure of having to take detailed notes during the interview. Instead, he/she can focus on listening to the participant and asking appropriate spontaneous, exploratory questions to probe further (if the protocol allows it).
- It provides a full record of everything said during the cognitive interview. This material is essential for in-depth analyses. The recording can then be transcribed or used to create a written summary after the interview (Gray M., 2014; Miller K. et al., 2014).
Prior to the interview, it is recommended to check all the equipment that will be used.

Length of the interview: Taking part in a cognitive interview requires sustained concentration from respondent and interviewer alike.
Although there have been interviews lasting from 15 minutes to 2 hours, the general consensus is that one hour to an hour and a half is a reasonable time, and longer interviews would put too much strain on both parties. Indeed, the quality of the information gathered during the interview is strictly related to the respondent's level of attention and motivation; therefore, it may be advisable to divide a long questionnaire into parts to be tested separately. Also, it may well be that only some questions of the questionnaire need to be tested, as others have already been tested and used in other surveys. The length of the interview depends on several factors (Gray M., 2014; Willis G., 2005): the complexity of the questions, the number of questions tested and, depending on the approach adopted, the respondent's ability to think aloud or the amount of probing. There are also differences in the overall speed with which respondents answer the targeted survey questions or the probe questions. Furthermore, the number of skip patterns also affects the number of questions the respondents have to answer and be probed on, and hence the length of the interview. Occasionally, the respondent may show signs of fatigue and his/her attention and motivation may dwindle; in that case it is better to cut the interview short, because the following questions would not be tested properly.

Number of interviews per day: Considering that a cognitive interview also requires a lot of attention and concentration on the part of the interviewer, a limited number of interviews should be conducted in one day. This number may vary depending on the complexity of the questions to be tested and the level of expertise of the interviewer, as well as on the characteristics of the respondents (with some populations, interviews are more uncomfortable). Furthermore, in some cases the interviewer has to travel between locations, which is time-consuming.
Willis (2005) recommends a maximum of three interviews per day, especially when they are based on a less structured probing style, which entails greater autonomy and interaction on the part of the interviewee.

Number of cognitive interviewers: Depending on the number of interviews, resources and time available, the number of interviewers may vary. Of course, the more interviewers are involved, the greater the flexibility as to where and when interviews can take place, and the faster the project moves forward. More interviewers can also increase the chances of identifying problems and ensure a richer discussion on how to solve them; nevertheless, this requires additional effort to ensure consistency in the way the interviews are performed, especially when spontaneous probes are allowed. In this regard, training and interviewer briefing become even more essential.

Respondent's consent and respect of privacy and confidentiality: For potential respondents to be able to make an informed decision on whether to participate in the cognitive interview project, they need to receive detailed information, such as: who is conducting the research, what the topic of the questions is, how long the interview will take, whether the interview will be recorded, and how the collected information will be used and by whom. Furthermore, they need to understand that their participation is on a voluntary basis, that they can skip questions they do not want to answer, and that they can even withdraw at any time during the interview. It is also important to let them know whether they will receive a token of appreciation for their participation and/or a reimbursement of travel expenses. Other relevant issues to be addressed are privacy and confidentiality, especially when private and sensitive information is to be gathered. Therefore, potential respondents need to know how privacy and confidentiality will be maintained during all phases of the cognitive test project and for all material collected.
This information must be provided during recruitment and repeated before the interview. Details that could potentially identify participants should not be included in any report or other research output, and the data collected have to be used only for the purpose stated. Names and contact details should be stored securely and not kept after the end of the project unless participants have given their consent. The recorded interviews and other documentation must likewise be stored under lock and key, labelled with subject ID numbers rather than personal identifiers such as names, and finally destroyed. Respondents are requested to sign consent forms; there are usually three of them: a) for participating in the interview; b) for audio recording; and c) a specific consent form if the interview is to be videotaped. The last two have to be distinct because some respondents may not consent to both.

During the interview, the interviewer should:

- Listen to the respondent carefully, taking only the few notes that may help you probe to clarify points, especially if the interview is recorded
- When summarizing what has been said, check with the respondent to avoid errors or misunderstandings
- Allow the respondent to make mistakes, as they may indicate problems with the test questions
- Accept changes to the answers provided to the test questions and investigate the reasons for them
- Offer encouragement and acknowledge difficulty
- Watch for visual clues (e.g. smiling, frowning, irritation...) and make sure you ask the respondent why he/she reacted in this way
- Monitor the time during the interview
- Cover all the material in the interview test protocol if time permits, but be aware of signs of distraction or fatigue
- Emphasize that the interview is not testing the respondent but the questionnaire, to assess whether it is difficult to understand or administer
- Do not assume that the respondent has no problems with a question just because no other interviewee had problems with it
- Make interviewees feel comfortable to get the most out of their cooperation, for example by using phrases such as "I did not write these questions, so you should not worry about criticizing them."
- Reassure the respondent that not understanding a question is perfectly fine, because it is the question that is under evaluation, not him/her or the answer provided

Finally, the interviewer must be professional, friendly, flexible, at ease and non-judgemental.

Data Management, Analysis and Interpretation

Generally, each cognitive interview generates several types of raw data (also called primary data):

- Socio-demographic and background information about each respondent
- The recording of the interview and/or transcripts of the recording
- The completed test questionnaire
- The interviewer's observations and notes (recorded in the interview protocol), and
- The interviewer's notes written after each interview

The socio-demographic and background information about the respondent is collected during the recruitment phase as well as at the beginning of the interview. If the interview is audio or video recorded, the recording provides the most complete information, capturing everything the participant said and did during thinking aloud and probing. A decision should be made whether or not to transcribe the recording, taking into account that it is a very time-consuming activity. If the interview is not recorded, the interviewer needs to put in extra effort to take very detailed notes. Following the respondent's answers to the draft survey questions (which are also available), and depending on the approach used, the interviewer may have made notes of any inconsistencies or lack of clarity in the answers for further probing, or he/she may have filled in an observation checklist (e.g.
skipping of questions, changing answers after probes, problems in selecting answer categories...). In some cognitive test projects, an observer is also present during the interview to take notes and compile an analysis grid of the respondent's behaviour. Finally, it is generally recommended to prepare a first summary - right after each interview and before the next one - reviewing any notes written in the protocol and other material used by the observer.

Irrespective of the approach used, verbal probing or think aloud, the main output of cognitive interviews is verbal text that needs to be analysed to determine whether or not respondents have a problem with a particular question. The analysis consists essentially in extracting information about question performance from these verbal reports. There are different procedures practitioners can use to review the verbal reports (Blair J. & Brick P., 2010; Willis G., 2015). Whatever the procedure adopted, however, it is essential to reduce and organize the information, and to provide a recap for each participant (interview notes or interview summaries) of all the relevant information collected before proceeding to a systematic analysis. It is common practice to use a template for each interview, gathering in a single document all the relevant data from all sources (e.g. audio recordings, observations, completed questionnaires...). This way, all interviews will be consistent and report their findings in the same order and with a similar degree of detail. All interview summaries should provide a full and accurate record of all that occurred during the interview. They should be clear enough for someone who was not present at the interview to understand what happened. They should be concise and accurately reflect what was said and done in the interview, by both participant and interviewer. According to Miller et al.
(2014) and d'Ardenne & Collins (2015), the written summary should:

- Always provide the original answer to each test question given by the participant. Sometimes respondents change their initial answer and offer a revised response; both have to be reported in the summary, explaining how and why the original answer was changed;
- Describe both positive and negative findings on how the questions performed;
- Specify the type of difficulties the respondent may have experienced in answering the question (e.g. the question/answer categories had to be repeated, clarifications were asked for...);
- Report any confusion the respondents may have shown regarding the survey questions, contradictory answers to the various survey questions, and inconsistencies in the respondents' narrative, explaining how and why a given answer was provided;
- Always provide a clear distinction between participant-driven findings (think aloud, general probes) and interviewer-driven findings (specific probes);
- Use examples and quotes that can illustrate and substantiate the summary;
- Make a clear distinction between quotes and summaries of what has been said;
- Describe the findings in as much detail as possible; if any area was not covered, this should be stated in the notes.

In order to produce an accurate interview note/summary, it is essential to listen to the recordings, or to refer to the written transcripts of recordings when available, in addition to the interviewers' notes. The next stage of data management consists in merging all interview summaries into a single data set ready for analysis. d'Ardenne and Collins (2015) suggest using the Framework method, first described by Miles and Huberman (1994) and then expounded in more detail in Spencer et al. (2003). This is a matrix-based approach for managing qualitative data, whereby the content of each interview is entered into a grid with each row representing an individual participant and each column an area of investigation.
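To make the grid structure concrete, it can be sketched in a few lines of code. This is a hypothetical illustration only: the Framework method prescribes no software, and the respondent IDs, column labels and findings below are invented.

```python
# A Framework matrix: one row per participant, one column per area of
# investigation (here, one column per question tested). All labels invented.
framework = {
    "R01": {"Q1_comprehension": "Understood 'household' as family only",
            "Q2_recall": "Could not recall events beyond 6 months"},
    "R02": {"Q1_comprehension": "Included flatmates in 'household'",
            "Q2_recall": "Answered without difficulty"},
}

def case_summary(participant):
    # Horizontal read: the complete case summary for one participant.
    return framework[participant]

def column(area):
    # Vertical read: every participant's entry for one area of investigation.
    return {p: areas.get(area, "") for p, areas in framework.items()}

print(case_summary("R01"))
print(column("Q1_comprehension"))
```

An empty cell (here an empty string) flags a gap to be filled by returning to the primary data; the same structure maps directly onto a spreadsheet with one worksheet per matrix.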
Data within the grid can then be read horizontally, as a complete case summary for each participant, or vertically, for a systematic review of the data collected per area of investigation. Depending on the research aims, one matrix can be created for each question tested, or blocks of similar questions can be put together into a single matrix. When creating Framework matrices, it is critical that the most important information be entered under each column heading, covering all the topics addressed in the summary templates. Therefore, it will be necessary to revisit the notes of each interview in order to select the most relevant and essential information for input into each cell of the matrix. In doing this, it will also be possible to check the quality of the original notes, and it may happen that a cell left empty can be filled by returning to the primary data and finding the missing information. Framework matrices can be set up using non-specialist software (such as an Excel spreadsheet) or specialist qualitative data analysis software, such as NVivo (Spencer et al., 2014). Another specialist package is Q-Notes21, developed by the Collaborating Center for Question Evaluation and Design Research at the US National Center for Health Statistics to assist with the collection, organization and analysis of cognitive interviewing data. Indeed, it is more than a simple data management system: Q-Notes allows researchers to conduct the various tiers of analysis in a systematic, transparent fashion, and it makes data entry easy and accessible by centralizing the data entry process.

21 The characteristics of Q-Notes are fully described by Mezetin J. & Massey M. (2014), "Analysis software for cognitive interviewing studies: Q-Notes and Q-Bank", in Miller K., Willson S., Chepp V., Padilla J. L. (eds) Cognitive Interviewing Methodology, John Wiley & Sons, Hoboken, New Jersey.
All interview data are located in a single convenient project location rather than in separate files and folders. This centralization ensures consistency across interviews because all data are recorded in a uniform format. Q-Notes is specifically designed to help researchers analyse data while conducting interviews, summarizing interview notes, developing thematic schemas, and drawing conclusions about question performance (Mezetin J. & Massey M., 2014).

Despite wide recognition of cognitive interviewing as an established and respected pretesting method, there is little consensus on how to handle verbal reports or carry out the analysis in order to produce reputable findings (Blair J. & Brick P., 2010; Collins D., 2007; Collins D., 2014; Ridolfo H. & Schoua-Glusberg A., 2011; Miller K. et al., 2014). Willis (2015) wrote a book specifically on analysis strategies for the cognitive interview and presented five different analysis models. Indeed, depending on the chosen approach, the analysis can be conducted with or without a coding scheme. In particular, he called "text summary" the approach that does not use a coding scheme: the aggregated comments of the cognitive interviewer(s), focusing on the description of dominant themes, conclusions, and problems highlighted by a set of aggregated interviewer notes. Other models use coding schemes, which reduce the data in a more formal manner than the mere description of basic findings. This coded approach can be top-down (also called deductive coding), involving the development and application of potential coding categories prior to data analysis, or bottom-up, building codes from the data. In the top-down approach, analysts use two main variants according to their theoretical viewpoint: cognitive coding, which refers to Tourangeau's four-stage model, and question feature coding, which centres on the characteristics of the problems raised by the questions (e.g. the Question Appraisal System, QAS).
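A minimal, hypothetical sketch of top-down (deductive) coding is shown below: the four stage labels follow the Tourangeau model named above, while the question IDs and coded findings are invented for illustration.

```python
from collections import Counter

# Tourangeau's four response stages used as a deductive code frame.
STAGES = ("comprehension", "retrieval", "judgment", "response")

# Each coded finding pairs a tested question with the stage at which the
# problem occurred. All entries below are invented.
findings = [
    ("Q1", "comprehension"),
    ("Q1", "comprehension"),
    ("Q2", "retrieval"),
    ("Q3", "judgment"),
]

# Tally problems per question and stage to see where each question fails.
tally = Counter(findings)
for (question, stage), n in sorted(tally.items()):
    assert stage in STAGES  # reject any code outside the agreed frame
    print(f"{question}: {n} problem(s) coded at the {stage} stage")
```

The cognitive-coding variant is shown here; a question feature scheme such as the QAS would simply replace the stage tuple with its own problem categories.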
The two variants can also be combined. Furthermore, in the context of cross-cultural cognitive interviewing, other coding schemes have been developed (e.g. the General Cross-Cultural Problem Classification by Willis G. and Zahnd E., 2007; the Appraisal System for Cross-National Surveys by Lee J., 2014; and the Cross-National Error Source Typology, CNEST, by Fitzgerald R., Widdop S., Gray M., and Collins D., 2011). Bottom-up coding uses inductive approaches to analysis: based on an intensive reading of the raw data, codes are built from the ground up. The most common codes relate to themes and patterns of interpretation. The procedures used, at least in the early phases of analysis, may be somewhat similar to the text summary approach, in which cognitive interview results are mined for meaningful information. These codes are built from quotes, notes, or a combination of both; however, rather than achieving data reduction only through the production of a text-based summary, the analyst develops codes to represent key findings and applies them to each occurrence within an interview that is deemed noteworthy.

The analysis is an iterative process for which enough time must be set aside when planning the cognitive test. Materials may need to be reviewed several times for inconsistencies or missing information. According to Miller, "The general process for analysing cognitive interviewing data involves synthesis and reduction - beginning with a large amount of textual data and ending with conclusions that are meaningful and serve the ultimate purpose of the study" (Miller K., 2014, page 36), by following five incremental and iterative steps: conducting interviews, producing summaries, comparing across respondents, comparing across subgroups of respondents, and reaching conclusions (Miller K., 2014). The analysis process involves two stages: description and interpretation.
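Miller's third and fourth steps, comparing across respondents and across subgroups, can be sketched over the merged interview summaries as follows. This is a hypothetical illustration: the respondent records, the subgroup variable and the problem flags are all invented.

```python
from collections import defaultdict

# Invented interview summaries after merging: respondent ID, a subgroup
# variable, and whether a problem with tested question Q1 was observed.
summaries = [
    {"id": "R01", "education": "low",  "problem_with_Q1": True},
    {"id": "R02", "education": "low",  "problem_with_Q1": True},
    {"id": "R03", "education": "high", "problem_with_Q1": False},
    {"id": "R04", "education": "high", "problem_with_Q1": True},
]

# Compare across respondents: how many experienced the problem at all.
total = sum(s["problem_with_Q1"] for s in summaries)

# Compare across subgroups: does the problem cluster in one group?
by_group = defaultdict(list)
for s in summaries:
    by_group[s["education"]].append(s["problem_with_Q1"])

print(f"{total} of {len(summaries)} respondents had a problem with Q1")
for group, flags in sorted(by_group.items()):
    print(f"education={group}: {sum(flags)}/{len(flags)} with the problem")
```

Such tabulations feed both of the two analysis stages, description (where and for whom problems occur) and interpretation (why they occur), remembering that in a small purposive sample the counts indicate patterns, not prevalence.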
The descriptive stage focuses on understanding how the survey question has been interpreted and responded to, and on the factors that influence interpretation and response. The outputs of the descriptive analysis are therefore:

- the identification and classification of "potential" errors
- a map of the circumstances in which errors occur
- a description of who was affected.

The second stage, the explanatory analysis, investigates how the "potential" problems, and the patterns observed in them, have emerged.

Conclusion

The cognitive interviewing method allows the collection of in-depth, thematic understandings of the patterns and processes used by respondents in their attempt to answer survey questions, and can therefore reveal problems in the questions that would not normally be identified in a traditional interview. Nevertheless, like other pre-testing methods, it is not without limitations. The main criticisms underline the following aspects:

- While cognitive interviewing identifies problems in the questions, it cannot determine the extent to which these would occur in a survey sample, because this qualitative method involves an in-depth interviewing approach with a typically small, purposive sample. It cannot provide quantitative information on the extent of a problem or the size of its impact on survey estimates.
- The cognitive interview is focused on the question-and-answer process, and therefore on the survey questions. It is not possible to assess questionnaire length, because the survey questions are interspersed with the verbalization and exploration of thought processes.
- The context in which questions are asked may affect how participants answer or how willing they are to make the cognitive effort necessary to answer; and if probing occurs after each question or set of questions, it may change the way in which participants think about subsequent questions.
- The cognitive interviewing techniques rely on the participant's ability to articulate thought processes in order to identify "hidden or covert" problems, but not everyone can do this well. Furthermore, not all thought is conscious and therefore capable of being articulated.
- The collection of participants' verbal reports is subject to error: cognitive interviewers may not ask the survey question as worded, and may not ask the same probes or ask them in the same way.
- The rapport built between interviewer and participant may hide problems, such as non-response, that may occur in the actual survey.

Even with these limitations, the cognitive interviewing method is widely used as an effective tool to pretest questionnaires, because it helps to identify the different types of problems that respondents encounter, provides evidence about why these problems occur, and identifies the phenomenon or set of phenomena that a variable would measure once the survey data are collected.

References

Beatty P. (2004). The Dynamics of Cognitive Interviewing. In Presser S., Rothgeb J. M., Couper M. P., Lessler J. T., Martin E., Martin J., and Singer E. (eds) Methods for Testing and Evaluating Survey Questionnaires, Hoboken, NJ: John Wiley and Sons.

Beatty P. & Willis G. (2007). Research Synthesis: The Practice of Cognitive Interviewing. Public Opinion Quarterly, American Association for Public Opinion Research.

Blair J., Conrad F., Ackermann A., and Claxton G. (2006). The Effect of Sample Size on Cognitive Interview Findings. Paper presented at the American Association for Public Opinion Research Conference, Montreal, Quebec, Canada.

Blair J. & Brick P.D. (2010). Methods for the Analysis of Cognitive Interviews. Proceedings of the Section on Survey Research Methods, American Statistical Association, 3739-3748.

Boeije H. & Willis G. (2013). The Cognitive Interviewing Reporting Framework (CIRF): Towards the harmonization of cognitive testing reports. Methodology.
European Journal of Research Methods for the Behavioral and Social Sciences, Vol. 9, No. 3, 2013.

Collins D. & Gray M. (2015). Sampling and recruitment. In Collins D. (ed) Cognitive Interviewing Practice, NatCen Social Research, Sage, London.

Collins D. (2007). Analysing and interpreting cognitive interview data: a qualitative approach. Paper given at the Questionnaire Evaluation Standards Workshop, Ottawa, Canada.

Conrad F. & Blair J. (2009). Sources of error in cognitive interviews. Public Opinion Quarterly, Vol. 73, No. 1, 32-55.

Conrad F. & Blair J. (2004). Data Quality in Cognitive Interviews: The Case of Verbal Reports. In Presser S., Rothgeb J. M., Couper M. P., Lessler J. T., Martin E., Martin J., and Singer E. (eds) Methods for Testing and Evaluating Survey Questionnaires, Hoboken, NJ: John Wiley and Sons.

DeMaio T. J. & Rothgeb J. M. (1996). Cognitive interviewing techniques: In the lab and in the field. In Schwarz N. & Sudman S. (eds), Answering questions: Methodology for determining cognitive and communicative processes in survey research (pp. 177-195). San Francisco, CA: Jossey-Bass.

DeMaio T. J., Rothgeb J., and Hess J. M. (1998). Improving survey quality through pretesting (Working Papers in Survey Methodology No. 98/03). Washington, DC: U.S. Census Bureau.

d'Ardenne J. & Collins D. (2015). Data Management. In Collins D. (ed) Cognitive Interviewing Practice, NatCen Social Research, Sage, London.

Fitzgerald R., Widdop S., Gray M., & Collins D. (2011). Identifying sources of error in cross-national questionnaires: Application of an error source typology to cognitive interview data. Journal of Official Statistics, 27(4), 569-599.

Gray M. (2015). Conducting Cognitive Interviews. In Collins D. (ed) Cognitive Interviewing Practice, NatCen Social Research, Sage, London.

Lee J. (2014). Conducting cognitive interviews in cross-national settings. Assessment, 21(2), 227-240.

Mezetin J. & Massey M. (2014).
Analysis software for cognitive interviewing studies: Q-Notes and Q-Bank. In Miller K., Willson S., Chepp V., Padilla J. L. (eds) Cognitive Interviewing Methodology, John Wiley & Sons, Hoboken, New Jersey.

Miles M.B. & Huberman A.M. (1994). Qualitative Data Analysis: An Expanded Sourcebook. London: Sage Publications.

Miller K. (2014). Introduction. In Miller K., Willson S., Chepp V., Padilla J. L. (eds) Cognitive Interviewing Methodology, John Wiley & Sons, Hoboken, New Jersey.

Miller K. (2011). Cognitive Interviewing. In Madans J., Miller K., Maitland A., Willis G. (eds) Question Evaluation Methods, John Wiley & Sons, Hoboken, New Jersey.

Miller K. (2002). The Role of Social Location in Question Response: Rural Poor Experience Answering General Health Questions. Paper presented at the American Association for Public Opinion Research Conference, St. Pete Beach, Florida, May 2002.

Miller K., Willson S., Chepp V. and Ryan J. M. (2014). Analysis. In Miller K., Willson S., Chepp V., Padilla J. L. (eds) Cognitive Interviewing Methodology, John Wiley & Sons, Hoboken, New Jersey.

Ridolfo H. & Schoua-Glusberg A. (2011). Analyzing Cognitive Interview Data Using the Constant Comparative Method of Analysis to Understand Cross-Cultural Patterns in Survey Data. Field Methods.

Schwarz N. (2007). Cognitive aspects of survey methodology. Applied Cognitive Psychology, 21, 277-287.

Schoua-Glusberg A. & Villar A. (2014). Assessing translated questions via cognitive interviewing. In Miller K., Willson S., Chepp V., Padilla J. L. (eds) Cognitive Interviewing Methodology, John Wiley & Sons, Hoboken, New Jersey.

Spencer L., Ritchie J., and O'Connor W. (2003). Analysis: practices, principles and processes. In Ritchie J. and Lewis J. (eds), Qualitative Research Practice (1st edition). London: Sage Publications, pp. 199-218.

Survey Research Center (2016). Guidelines for Best Practice in Cross-Cultural Surveys.
Ann Arbor, MI: Survey Research Center, Institute for Social Research, University of Michigan. Retrieved from http://www.ccsg.isr.umich.edu/.

Tourangeau R. (1984). Cognitive science and survey methods: A cognitive perspective. In Jabine T. B., Straf M. L., Tanur J. M., Tourangeau R. (eds), Cognitive aspects of survey methodology: Building a bridge between disciplines (pp. 73-100). Washington, DC: National Academy Press.

Willis G. (1994). Cognitive interviewing and questionnaire design: a training manual (Cognitive Methods Staff Working Paper Series, No. 7). Hyattsville, MD: National Center for Health Statistics.

Willis G. (2015). Analysis of the Cognitive Interview in Questionnaire Design. Oxford University Press.

Willis G. (2005). Cognitive Interviewing: A Tool for Improving Questionnaire Design. Thousand Oaks, CA / London: Sage Publications.

Willis G. (2004). Cognitive Interviewing revisited: A useful technique, in theory? In Presser S., Rothgeb J., Couper M., Lessler J., Martin E., Martin J., et al. (eds), Methods for Testing and Evaluating Survey Questionnaires. New York: John Wiley & Sons.

Willis G. (1999). Cognitive Interviewing: A "How To" Guide. Rockville, MD: Research Triangle Institute.

Willis G., Schechter S. and Whitaker K. (1999). A comparison of cognitive interviewing, expert review, and behaviour coding: What do they tell us? Proceedings of the Section on Survey Research Methods, American Statistical Association, 28-37.

Willis G. & Zahnd E. (2007). Questionnaire Design from a Cross-Cultural Perspective: An Empirical Investigation of Koreans and Non-Koreans. Journal of Health Care for the Poor and Underserved, December 2007. DOI: 10.1353/hpu.2007.0118.

Willson S. & Miller K. (2014). Data Collection. In Miller K., Willson S., Chepp V., Padilla J. L. (eds) Cognitive Interviewing Methodology, John Wiley & Sons, Hoboken, New Jersey.
APPENDIX B

RECOMMENDATIONS FOR COGNITIVE TESTING OF THE EU QUESTIONNAIRE ON GBV AGAINST WOMEN AND MEN

Preamble

The primary purpose of the cognitive interview is to investigate how well questions perform when asked of survey respondents, that is, whether respondents understand the question correctly and whether they can provide accurate answers. Cognitive testing ensures that a survey question successfully captures the scientific intent of the question and, at the same time, makes sense to respondents. In evaluating a question's performance, cognitive testing examines the question-response process (a process that can be conceptualized in four stages: comprehension, retrieval, judgment and response) and considers the degree of difficulty respondents experience as they formulate an accurate response to the question. Data from cognitive interviews are qualitative, and analysis of those interviews can indicate the sources of potential response error as well as the various interpretations of the question. By conducting a comparative analysis of cognitive interviews, it is possible to identify patterns of error and patterns of interpretation across groups of people. This type of analysis is especially useful when examining the comparability of measures across countries. Even within one country, survey participants can have very different cultural backgrounds and/or may need to complete the survey questionnaire in languages other than the main language of that country. Comparability can be undermined, among other things, by a number of problems related to how a translated question works. The main reasons why questions might not perform as intended are: a) problems arising from translation choices, b) cultural factors influencing interpretation even in "perfectly" translated questions, and c) lack of construct overlap between the language/culture of the translation and the language(s) for which the questions were designed.
Recommendations for the cognitive test

Recruiting Respondents and Sample size

The sample selection for the cognitive test is "purposive": respondents are not selected through a random process, but rather for specific characteristics related to the target survey population and other characteristics related to the topic under investigation. For this cognitive test, the sample should be composed, at minimum, of people aged 18 and over, of both sexes, with different levels of education, who may or may not have been subjected to violence. Because the questionnaire contains filter questions, persons who are working or have worked and unemployed persons should be included, as well as persons with different civil/relationship statuses. If possible, they should come from both urban and rural areas. It is important that all the questions are covered by the people recruited. Because the cognitive test sample is purposive and not random, respondents can be recruited through a variety of informal methods23. For recruiting persons who have been subjected to violence, it can be useful to contact associations or other entities working in the field. When recruiting prospective participants, it is important to inform them that the interview needs to be recorded and to explain how privacy and confidentiality issues are addressed in the project.

23 See Section D Cognitive Interviewing in "Qualitative Methodologies for Questionnaire Assessment".

Each country should carry out at minimum 10 interviews, more if possible (up to 20-25). It is always better to identify more than the exact number of respondents needed, in order to avoid problems in the event that some potential participants eventually decide to withdraw their participation.

Selecting and Training Interviewers

The cognitive test questionnaire was developed to be easily administered.
However, if it is possible to select interviewers who understand the purpose of the test and who have experience conducting survey interviews, the data collected will likely be of higher quality. The interviews have to be conducted by interviewers who are native speakers of the target language, so that they are sensitive to subtle nuances that other fluent speakers of the language might not perceive. The number of interviewers required will depend on the available resources, the interviewers' expertise, time constraints and, especially, the number of interviews to be conducted. It is suggested not to have a single interviewer carry out all the interviews, even with a sample of 10 persons, and not to have more than 5 interviewers with a sample of 20 persons.

To train interviewers24, read through the questionnaire and the cognitive interviewing protocol with them. Be sure that interviewers understand the purpose of the interview. It is essential that interviewers understand that they are not to correct or help the respondent to answer questions; they must read each question exactly as it is written and then record the answer as it is reported by the respondent. If the respondent cannot answer a question, the interviewer should record "don't know" and then continue on to the next question.

Interviews

Each interview should last about one hour, though the exact length will vary depending on the respondent's characteristics, such as the speed with which he/she answers the tested questions and his/her ability to answer the probe questions by providing information useful for the research goals. General recommendations on how to conduct a cognitive interview are provided in Section D Cognitive Interviewing of the "Qualitative Methodologies for Questionnaire Assessment" report, while a proposal for an interview protocol tailored to the cognitive test of the EU Survey on Gender-based Violence is provided in Annex C.
The cognitive interviewing approach is based on verbal probing, with suggested general and specific probes to be posed after each question or set of questions. A set of questions is proposed for testing, but countries may remove or add questions, as well as modify the suggested probes, to meet individual country needs. The interviews should be audio recorded, subject to prior consent from the participant; when recruiting potential participants, this issue needs to be clearly addressed. If the research manager or the respondent does not agree to record the interview, due to the sensitive topic of the research, the interviewer needs to put extra effort into taking notes during the interview. Alternatively, another researcher may take part solely to take notes; in this case, the second researcher's role needs to be clearly explained to the participants. However, it is definitely preferable to record the interview. The confidentiality of all information provided by the respondent must be clearly stated before the interview begins.

Debriefing Interviewers

If possible, discuss the interviews with the interviewers after they are completed. Did some questions in the protocol not work? If so, which ones? What seemed to be the problem with them? Did any questions work particularly well? If so, which ones? Be sure to document and report interviewer perceptions at the end of the data collection sheet. This information will provide valuable insight when analysing the cognitive test data.

Entering Data and Performing Analysis

It is important to use a spreadsheet for recording data in a uniform way. An example is enclosed at the end of this document. It may be adapted to any specific change made to the suggested testing protocol to meet country-specific needs.

24 Ibidem
For each cognitive interview a spreadsheet should be filled in with the information collected, and then all interview summaries should be merged into a single data set ready for analysis25. The demographic sections provide essential background information that will be used to understand whether the questions work consistently across all respondents, or whether gender, level of education or territorial variables affect the ways in which respondents interpret the question or other aspects of the question response process. A report should be written26 including the main results and suggestions for improving the questions tested for the EU survey on gender-based violence against women and men.

25 Ibidem
26 Ibidem

INTRODUCTION OF THE INTERVIEW <<READ OUT TO RESPONDENT>>

Thank you for agreeing to participate in this interview. The purpose of this project is to develop questions about violence against women and men that will eventually be asked of many people of all ages around Europe. Therefore we are testing new questions with the help of people such as yourself. In particular, we need to find out whether the questions make sense to everyone and whether everyone understands the questions in the same way. Your interview will help us find out how the questions are working.

During the interview I will ask you questions and you answer them, just like in a regular survey. However, our goal is to get a better idea of how the questions are working. After you answer each question or set of questions, I will ask you to explain how you came up with that answer, and I will ask further questions to identify any problems with the question. Please keep in mind that I really want to hear all of your opinions and reactions. I did not personally develop the questions, so don't hesitate to point out anything that seems unclear or hard to answer, or any other problems that may arise from the questions and the answer categories.
I will take some notes, but to be sure I capture all of your answers, I ask your permission to record this interview. The recording will be used only by the researchers working on the project. Everything that you tell me is confidential and will be kept private. The information will be used only for the aims of the research. If you do not want to answer a question, please tell me and I will move to the next question. Finally, your interview will last about one hour, unless I run out of things to ask you before then. Before we begin, do you have any questions?

Record for Each Respondent

Respondent's sex: 1. Male 2. Female
Region of residence: _______
Degree of urbanisation: _______
Respondent's age in completed years: _______ years
Respondent's civil/relationship status: _______
Respondent's educational attainment level: _______
Respondent's main activity status: _______

The next questions are about your working life. Some people may have experienced unwanted behaviour with a sexual connotation from persons in the workplace, for example a colleague or co-worker; a boss or supervisor; or a client, customer or patient, which made them feel offended, humiliated or intimidated. Please think about your current workplace, or about your last workplace if you are currently not employed.

D1 Have you experienced inappropriate staring or leering that made you feel intimidated, or has somebody in the work environment sent or shown you sexually explicit pictures or photos that made you feel offended, humiliated or intimidated?
1 Yes
2 No
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

D4 Have you experienced that somebody in the work environment sent you sexually explicit emails or text messages, or made inappropriate, humiliating, intimidating or offensive advances on social networking websites or internet chat rooms (for example Facebook, WhatsApp, Instagram, LinkedIn, Twitter, etc.)?
1 Yes
2 No
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

D5 Have you experienced that somebody in the work environment threatened you with disadvantages if you were not sexually available?
1 Yes
2 No
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

SUGGESTED PROBES (for each question)
How did you come up with this answer? What went on in your mind when you were asked the question?
Was that easy or difficult to answer? Why?
What were you thinking? Are you thinking of a specific situation?
What would you say that question was asking of you?
What does the term "----" mean to you? In your words, what is "------"?
What time period were you thinking about when you answered this question?
How did you feel about answering this question? Do you find this question too personal/intrusive or embarrassing? Why? Do you think other people would find this question sensitive? Why?

Remind and encourage probes: "Can you tell me more about what you are thinking?" "Keep talking…"

Interviewer reminders: Did the respondent ask to have the question repeated? If so, what part of the question did the respondent find confusing? What kinds of trouble (if any) did the respondent have in answering the question?

Now we will talk about other situations that may occur in daily life. You may have been in a situation where the same person has been repeatedly offensive or threatening towards you, to the point of scaring you, causing you anxiety or forcing you to change your habits. For the next questions I would like to ask you to think about both your current and previous partners as well as other people.

N1 During your lifetime, has the same person repeatedly done one or more of the following things to you:
N1.1 … sent you unwanted messages, phone calls, emails, letters or gifts which caused fear, alarm or distress?
1 No
2 Yes, once
3 Yes, more than once
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

N1.6 During your lifetime, has the same person repeatedly made offensive or embarrassing comments about you or inappropriate proposals, or published photos, videos or highly personal information about you on the internet or social networks?
1 No
2 Yes, once
3 Yes, more than once
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

SUGGESTED PROBES (for each question)
How did you come up with this answer? What went on in your mind when you were asked the question?
Was that easy or difficult to answer? Why?
What were you thinking? Are you thinking of a specific situation?
What would you say that question was asking of you?
What does the term "----" mean to you? In your words, what is "------"?
What time period were you thinking about when you answered this question?
How did you feel about answering this question? Do you find this question too personal/intrusive or embarrassing? Why? Do you think other people would find this question sensitive? Why?

Remind and encourage probes: "Can you tell me more about what you are thinking?" "Keep talking…"

Interviewer reminders: Did the respondent ask to have the question repeated? If so, what part of the question did the respondent find confusing? What kinds of trouble (if any) did the respondent have in answering the question? Was the respondent thinking within the time frame of the lifetime?

4 Never
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

SUGGESTED PROBES (for each question)
How did you come up with this answer? What went on in your mind when you were asked the question?
Was that easy or difficult to answer? Why?
What were you thinking? Are you thinking of a specific situation?
What would you say that question was asking of you?
What does the term "----" mean to you? In your words, what is "------"?
Was it easy or difficult to choose that particular answer? Why?
Are there any categories missing from the options provided, or do they cover everything? What is missing?
How did you feel about answering this question? Do you find this question too personal/intrusive or embarrassing? Why? Do you think other people would find this question sensitive? Why?

Remind and encourage probes: "Can you tell me more about what you are thinking?" "Keep talking…"

Interviewer reminders: Did the respondent ask to have the question or the answer categories repeated? If so, what part of the question did the respondent find confusing? What about the answer categories? What kinds of trouble (if any) did the respondent have in answering the question?

The next questions are about experiences that people may have in childhood with their parents, stepparents or with other persons with whom they grew up.

P5 BEFORE YOU WERE 15 years old, did your parents (stepparents, foster parents) belittle or humiliate you with their words? Did it happen very often, often, sometimes or never?
1 Very often
2 Often
3 Sometimes
4 Never
8 Don't want to answer (DO NOT READ)
9 Don't know/Can't remember (DO NOT READ)

SUGGESTED PROBES (for each question)
How did you come up with this answer? What went on in your mind when you were asked the question?
Was that easy or difficult to answer? Why?
What were you thinking? Are you thinking of a specific situation?
What would you say that question was asking of you?
What does the term "----" mean to you? In your words, what is "------"?
Are there any categories missing from the options provided, or do they cover everything? What is missing?
Was it easy or difficult to choose that particular answer? Why?
What time period were you thinking about when you answered this question?
How did you feel about answering this question? Do you find this question too personal/intrusive or embarrassing? Why? Do you think other people would find this question sensitive?
Why?

Remind and encourage probes: "Can you tell me more about what you are thinking?" "Keep talking…"

Interviewer reminders: Did the respondent ask to have the question repeated? If so, what part of the question did the respondent find confusing? What kinds of trouble (if any) did the respondent have in answering the question? Was the respondent thinking within the time frame of "before he/she was 15 years old" or some other time frame?

The Spreadsheet for recording the data (R = respondent)

Date of interview: _______ Interview code: _______

For each tested question (D1, D4, D5, N1.1, E1, E26, E28, E30, E32, E34, E36, H1.1, H1.4, P5) the spreadsheet records:
- Question code and the answer given
- Understanding of the question / what R thought / examples
- Understanding of words and phrases
- Answer categories
- R's tips to change the question or terms
- Other

Issue flags for each question:
- Difficulties in understanding the purpose of the question
- R asks to repeat the question
- R has difficulties with terms or phrases
- R asks to repeat the answer categories
- R changes the answer after probing
- The question creates embarrassment
- R doesn't answer
- Other issues

Interviewer's opinions:
- Embarrassing question
- Question difficult to read
- Other (specify)

Respondent background variables: sex, age, region of residence, degree of urbanisation, educational attainment, civil/relationship status, main activity status.

Interviewer general notes: _______
Place of the interview (e.g. home/office/other): _______
Date of compilation: _______
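To keep the per-interview sheets uniform across interviewers, a blank recording template with the layout described above can be generated programmatically. The following is a minimal sketch in Python using only the standard library; the column names are shortened for readability and are illustrative, not prescribed by the protocol.

```python
import csv
import io

# Header of the recording spreadsheet sketched above: one row is
# filled in per tested question. Column names are shortened,
# illustrative versions of the headings in the template.
COLUMNS = [
    "question_code", "answer",
    "understanding_question", "understanding_terms", "answer_categories",
    "respondent_tips", "other",
    "asks_repeat_question", "difficulty_with_terms",
    "asks_repeat_categories", "changes_answer_after_probing",
    "embarrassment", "no_answer", "other_issues",
    "interviewer_opinions",
]

# The question codes listed in the template rows.
QUESTIONS = ["D1", "D4", "D5", "N1.1", "E1", "E26", "E28", "E30",
             "E32", "E34", "E36", "H1.1", "H1.4", "P5"]

def blank_recording_sheet():
    """Return a CSV template with one empty row per tested question."""
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=COLUMNS)
    writer.writeheader()
    for code in QUESTIONS:
        # Unlisted fields default to empty strings, to be filled in
        # by the interviewer after each interview.
        writer.writerow({"question_code": code})
    return buf.getvalue()

template = blank_recording_sheet()
print(len(template.splitlines()))  # 15 lines: header + 14 question rows
```

One such file would be filled in per interview, with the interview code, demographics and general notes kept alongside it, before all sheets are merged for analysis.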