Quantitative Research Methods in Social Sciences: Experiment Design, Sampling, and Bias

An in-depth exploration of quantitative research methods in the social sciences, focusing on experiment design, sampling techniques, and potential sources of bias. Key topics include the importance of proper study design, the use of mild deception, population generalization, and the role of pre-tests, pilot tests, and debriefing. The document also covers errors in sampling, leading questions, double-barreled questions, order effects, cooperation rate, interview context, social desirability bias, and non-reactive measures. It also discusses the Chicago School Phase I, theoretical sampling, and types of questions asked in field research interviews.


Soc 497 Test 3

Four trends that sped the expansion of the experiment -
o Behaviorism
o Quantification
o Treating subjects anonymously
o Use of experimental methods for applied purposes

Behaviorism - a school of psychology founded in the 1920s that emphasized measuring observable behavior or the outcomes of mental life and advocated the experimental method for conducting rigorous empirical tests of hypotheses.

Quantification - measuring social phenomena with numbers. Researchers began quantifying social constructs such as spirit, consciousness, and will. Scales and indexes were developed to measure abstract concepts. Example: the use of IQ tests to quantify intelligence.

Treating Subjects Anonymously -
o Early social research reports contained the names of the specific individuals who participated; most were other professional researchers.
o Later research treated participants anonymously.
o Over time there was a shift to using college students and schoolchildren as research participants.
o Over time the relationship between experimenters and participants became more detached, remote, and objective.

Use of experimental methods for applied purposes - the experimental method spread beyond basic academic research into applied settings such as education, industry, and government programs.

Subjects - participants in experimental research; the term used instead of "participants," which is more common in qualitative research.

Random Assignment - participants are randomly assigned to either the experimental group or the control group.

Experimental Group - the participants who receive the treatment.
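To make random assignment concrete, here is a minimal Python sketch (the participant labels, group sizes, and function name are hypothetical, for illustration only):

    import random

    def randomly_assign(participants, seed=42):
        """Shuffle the pool and split it in half: (experimental, control)."""
        rng = random.Random(seed)  # fixed seed so the split is reproducible
        pool = list(participants)
        rng.shuffle(pool)
        half = len(pool) // 2
        return pool[:half], pool[half:]

    participants = [f"P{i}" for i in range(1, 21)]  # 20 hypothetical subjects
    experimental, control = randomly_assign(participants)
    print("Experimental group:", experimental)
    print("Control group:", control)

Because assignment depends only on chance, any pre-existing differences between people are spread roughly evenly across the two groups.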

Control Group - the participants who do not receive the treatment.

Pre-Test - a test that measures the dependent variable of an experiment prior to the treatment. Example: before the study takes place you can ask participants how often they smoke, what their health is like, etc.

Independent Variable (Stimulus/Treatment) - the variable the experimenter manipulates in experimental research; the treatment or stimulus whose effect is being tested.

Dependent Variable - the outcome of the experiment.

Post-Test - a test that measures the dependent variable of an experiment after the treatment; it asks essentially the same questions as the pre-test.

Classical Design - Random Assignment: Yes; Pretest: Yes; Posttest: Yes; Control Group: Yes; Experimental Group: Yes

Deception - a lie by an experimenter to participants about the true nature of an experiment, or the creation of a false impression through his or her actions or the setting. Example: giving the subjects a sugar pill.

Confederate - a person working for the experimenter who acts as another participant, or plays a role in front of participants, to deceive them with the experimenter's cover story. Example: someone who is in on the experiment, as in the shock study.

Cover Story - a type of deception in which the experimenter tells a false story to participants so that they will act as wanted and do not know the true nature of the study.

One-Shot Case Study - Random Assignment: No; Pretest: No; Posttest: Yes; Control Group: No; Experimental Group: Yes

Pre-Experimental Designs - experimental plans that lack random assignment or use shortcuts and are much weaker than the classical design. They are substituted in situations where an experimenter cannot use all of the features of a classical design.
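As a rough illustration of how data from the classical design above can be analyzed, this sketch (hypothetical scores, not from any real study) compares the average pretest-to-posttest gain in the experimental group against the gain in the control group:

    def mean(xs):
        return sum(xs) / len(xs)

    # (pretest, posttest) scores for each participant -- invented numbers
    experimental = [(10, 18), (12, 19), (11, 17), (9, 16)]
    control = [(10, 12), (12, 13), (11, 11), (9, 10)]

    exp_gain = mean([post - pre for pre, post in experimental])
    ctl_gain = mean([post - pre for pre, post in control])

    # The estimated treatment effect is the difference in average gains.
    print(f"Experimental gain: {exp_gain:.2f}")
    print(f"Control gain: {ctl_gain:.2f}")
    print(f"Estimated treatment effect: {exp_gain - ctl_gain:.2f}")

Comparing gains rather than raw posttest scores uses the pretest to adjust for where each group started.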

Quasi-Experimental Designs - plans that are stronger than pre-experimental designs but are still variations of the classical design. They are used in situations where the experimenter has limited control over the independent variable.

Interrupted Time Series - an experimental plan in which the dependent variable is measured periodically across many time points and the treatment occurs in the midst of such measures, often only once.

Equivalent Time Series - an experimental plan in which there are several repeated pretests, posttests, and treatments for one group, often over a period of time.

Latin Square - an experimental plan to examine whether the order or sequence in which participants receive versions of the treatment has an effect.

Solomon Four-Group Design - an experimental plan in which participants are randomly assigned to two control groups and two experimental groups; only one experimental and one control group receive a pre-test; all four groups receive a post-test. This design is used to check for the testing effect.

Factorial Designs - an experimental plan that considers the impact of several independent variables simultaneously.

One-Group Pretest-Posttest - Random Assignment: No; Pretest: Yes; Posttest: Yes; Control Group: No; Experimental Group: Yes

Static Group Comparison - Random Assignment: No; Pretest: No; Posttest: Yes; Control Group: Yes; Experimental Group: Yes

Two-Group Posttest Only - Random Assignment: Yes; Pretest: No; Posttest: Yes; Control Group: Yes; Experimental Group: Yes
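The Latin Square entry above has a simple constructive form. Here is a minimal sketch (treatments A-D are hypothetical) that builds a cyclic Latin square, so each treatment appears exactly once in every row (participant group) and once in every sequence position:

    def latin_square(treatments):
        """Cyclic construction: row r is the treatment list rotated by r."""
        n = len(treatments)
        return [[treatments[(row + col) % n] for col in range(n)]
                for row in range(n)]

    for group, order in enumerate(latin_square(["A", "B", "C", "D"]), start=1):
        print(f"Group {group} receives treatments in order: {' -> '.join(order)}")

Real designs may randomize the rows and columns further, but the cyclic construction already guarantees the balancing property that lets order effects be examined.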

Time Series Design - Random Assignment: No; Pretest: Yes; Posttest: Yes; Control Group: No; Experimental Group: Yes

Maturation Effect - a threat to internal validity from natural changes in participants over the course of the experiment (growing older, more tired, bored, or hungrier) that affect the dependent variable apart from the treatment.

Selection Bias - occurs when groups in an experiment are not equivalent at the beginning of the study. Example: in an experiment on physical aggressiveness, the experimental group unintentionally contains subjects who are football, rugby, and hockey players and wrestlers, whereas the control group is made up of musicians, chess players, ballet dancers, and painters.

History Effect - results from an unplanned event occurring that is outside the control of the experimenter. Example: a study is being conducted on people's fears; halfway through a two-week experiment, an airplane crashes in the city where the experiment is being conducted.

Compensatory Behavior - when participants in the control group modify their behavior to make up for not getting the treatment. Example: once an inequality is known by another school system (the control group), participants in the control group work extra hard to prepare for the SATs to overcome the inequality.

Testing Effect - occurs because the very process of conducting a pre-test can have an effect on the outcome (dependent variable). Example: a researcher gives students an examination (pre-test) on the first day of class. Half of the students are then assigned to the experimental group and receive extra help from a tutoring program, while students in the control group do not. If any of the students learned from the pre-test, it could affect how they perform on the post-test. This can throw off the results because the researcher does not know whether the treatment (tutoring program), exposure to the pre-test, or a mix of both is why students' performance improved on the post-test.

Instrumentation - occurs when the measurement instrument changes during the experiment. Example: a video camera breaks, a recording device does not work properly, one of the observers gets sick and must be replaced by another, etc.

Experimental Mortality - when some research participants do not continue through the entire experiment. Example: you begin a weight-loss study with 60 people. At the end of the program only 40 remain, each of whom lost 5 pounds. The twenty who left could have had different results than the 40 who stayed.

Statistical Regression (Regression toward the mean) -

a threat to internal validity from measurement instruments providing extreme values and a tendency for random errors to move extreme results toward the average. Example: in an experiment, if you happen to have individuals who score extremely high or extremely low on a pretest, it is likely that their scores will move closer to the mean when they are measured on the posttest, with or without the treatment.

Diffusion of Treatment - when the treatment "spills over" from the experimental group to the control group and participants modify their behavior because they have learned about the treatment. Example: subjects participate in a day-long experiment on a new way to memorize words. During a break, experimental and control group subjects cross paths in the bathroom and begin talking.

Double-Blind Study - a design intended to control experimenter expectancy. In this type of experiment, the only people who have direct contact with participants do not know the details of the hypothesis or treatment.

Experimenter Expectancy - a threat to internal validity that occurs when the researcher's expectations about the outcome are unintentionally communicated to participants and influence their behavior. Example: a researcher studying reactions toward those with disabilities deeply believes that females are more sensitive toward those with disabilities than males are. Through eye contact, tone of voice, pauses, and other nonverbal communication, the researcher unconsciously encourages female research participants to report positive feelings toward those with disabilities.

10 things to avoid when constructing a survey -
o Jargon (technical terms): e.g., plumbers talk about snakes, lawyers about uberrima fides, and psychologists about the Oedipus complex.
o Abbreviations: e.g., MSS can stand for several things: Manufacturers Standardization Society, Marine Systems Simulator, Medical Student Society, etc.
o Slang: e.g., bonkers (crazy), pad (apartment), munchies.

Demand Characteristics - a threat to internal validity that occurs when research participants pick up clues about the hypothesis or an experiment's purpose and modify their behavior to what they think the research demands of them (e.g., supporting the hypothesis). It is important for researchers to properly design the study and use mild deception as needed to protect against demand characteristics.

Placebo Effect - occurs when participants do not receive the real treatment but receive a non-active or imitation treatment, yet respond as though they have received the real treatment. Example: you create an experiment to help heavy smokers stop smoking. You give the experimental group a pill to reduce nicotine dependence; the control group gets a placebo (empty pill). If participants who received the placebo also stop smoking, then merely participating in the experiment and taking something they believed would help them quit had an effect.
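A small simulation makes the statistical regression (regression toward the mean) threat above concrete. Each score below is a stable "true ability" plus random error (all numbers hypothetical), so the group with extreme pretest scores drifts back toward the average on the posttest even though no treatment is given:

    import random

    rng = random.Random(1)
    true_ability = [rng.gauss(100, 10) for _ in range(1000)]
    pretest = [t + rng.gauss(0, 10) for t in true_ability]
    posttest = [t + rng.gauss(0, 10) for t in true_ability]

    # Look only at people with extreme pretest scores (top decile).
    cutoff = sorted(pretest)[900]
    extreme = [i for i in range(1000) if pretest[i] >= cutoff]

    pre_mean = sum(pretest[i] for i in extreme) / len(extreme)
    post_mean = sum(posttest[i] for i in extreme) / len(extreme)
    print(f"Extreme group pretest mean:  {pre_mean:.1f}")   # well above 100
    print(f"Extreme group posttest mean: {post_mean:.1f}")  # closer to 100

The extreme pretest group was partly selected for lucky positive errors, so on a second measurement its average falls back toward the true mean.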

Population Generalization - when researchers can accurately generalize from what they learn in an experiment to a population. Example: an experiment is conducted on 100 undergraduates at a university. To whom can the researcher generalize the findings? All undergraduate students at that university who attended that year? All college students in the same country? All people in society? To improve the population generalization form of external validity in an experiment, researchers draw a random sample from the population and conduct the experiment with the sampled individuals.

Naturalistic Generalization - whether or not a researcher can generalize accurately from what was learned in an artificially created, controlled laboratory setting to real-life natural settings.

Mundane Realism - asks whether an experiment or a situation is like the real world. Example: you have children play with toys in a laboratory with four white walls. Mundane realism would be stronger if you created a real-life play area.

Experimental Realism - the impact of an experimental treatment or setting on people; it occurs when participants are caught up in the experiment and are truly influenced by it. It remains weak if subjects remain unaffected and the experiment has little impact on them. Example: the Stanford Prison Experiment (1971). Participants were randomly divided into two groups (prisoners and guards) and were put in a mock prison. Caught up in the experiment, the guards enforced authoritarian tactics, and many prisoners accepted psychological abuse. The experiment was called off after six days because the experimenters feared violence.

Theoretical Generalization - asks whether the researcher can accurately generalize from an abstract theory that he or she is testing to a set of measures in the experiment. It depends on:
o whether the experiment has strong experimental realism
o measurement validity (how well variables measure what the researcher intends to measure)
o whether the researcher is able to control confounding variables (variables that could damage the internal validity of the experiment)

Threats to External Validity -

  • A threat to external validity that occurs because participants are aware that they are in the experiment and being studied.
  • Example: College students who know they are part of an experiment may hold in their true feelings on gender roles because they do not want to appear sexist to the researchers.

Reactivity (Hawthorne effect) -
o A reactivity result named after a famous case: a series of experiments conducted at the Western Electric Hawthorne plant near Chicago, Illinois, during the 1920s.
o Researchers modified many aspects of working conditions (lighting, time for breaks, etc.) and measured worker productivity.
o They discovered productivity rose after each modification, no matter what it was.
o Workers were not responding to the treatment but to the fact that they were being watched.

Manipulation Check -

  • A separate measure of independent or dependent variables to verify their measurement validity and/or experimental realism.
  • Researchers check and look for potential flaws, mishaps, or misunderstandings using pre-tests, pilot tests, and experimental debriefing.

Debriefing - telling the participants what the experiment was actually about after it ends. If participants mention that they acted differently or did not take the study seriously, you must throw out their results, as this can harm your validity.
  • Example: you have a confederate act as if he or she is disabled and have preliminary research participants observe the confederate. As a check, you ask whether the participants believed the confederate was truly disabled or just acting.

Field Experiments -
  • A study that takes place in a natural setting such as a subway car, a liquor store, or a public sidewalk. Participants are usually unaware that they are involved in an experiment and react in a natural way.
  • Example: researchers have a confederate fake a heart attack on a subway car to see how the bystanders react.
  • Field experiments are conducted by journalists (not researchers) on the show "What Would You Do?"

Lab Experiments - tend to have higher internal validity but lower external validity. They are better controlled than field experiments but have less generalizability because they are conducted in labs and participants know that they are part of a study.

Field Experiments - tend to have higher external validity (since they are conducted in natural settings) but lower internal validity, because the researcher is unable to control many facets of the experiment.

Internal Validity - confidence that the changes in the dependent variable were actually caused by the independent variable rather than by confounding factors.

External Validity - whether the findings can be generalized beyond the experiment to what is happening in the world around you.

  • Example: a wallet is dropped in a field experiment, but it begins to rain; you cannot control for this unexpected variable.

Planning and Pilot Tests -
o During the planning stages researchers anticipate alternative explanations or threats to internal validity.
o Develop a well-organized system for recording data.
o Pilot test any apparatus (computers, video cameras, digital recorders, etc.).

Instructions to Subjects -
o Experiments involve giving instructions to participants to "set the stage."
o Researchers must word instructions carefully and follow a prepared script so that all participants hear the same thing.

Post-Experimental Interview -
o If the researcher uses deception, they must ethically debrief the research participant, explain the true purpose of the study, and answer all of the participant's questions.
o Researchers can learn in the interview what participants thought and how their definitions of the situation affected their behavior.
o Researchers can explain the importance of not revealing the true nature of the study to other potential participants.

Social Survey Movement -
o (1860s-1930s) Started with 19th-century social reform movements in the U.S. and Great Britain.
o Helped people document urban conditions and poverty produced by early industrialization.
o It was an action-oriented community research program that interviewed people and documented conditions to gain support for social and political reforms.
o Offered a detailed empirical picture of specific areas and combined sources of quantitative and qualitative data.
o Surveys were part of qualitative community field studies; they were descriptive and did not use scientific sampling or statistical analysis.

Expansion of survey research during WWII -
o Survey research dramatically expanded during WWII, especially in the U.S.
o Academic social researchers and practitioners from industry converged in Washington, D.C. to work for the war effort.
o Researchers received generous funding and government support to study civilian and soldier morale, consumer demand, production capacity, enemy propaganda, and the effectiveness of bombing.
o Academic researchers helped practitioners with precise measurement, sampling, and statistical analysis, while practitioners helped academics learn the practical side of organizing and conducting surveys.

Survey research after WWII -
o Officials quickly dismantled the large government survey establishment.

o This was done to cut costs and because political conservatives feared that reformers might use survey methods to document social problems and advance policies that would help unemployed workers and promote racial equality.
o After the war many researchers left government and returned to universities, where they founded new social research organizations: the National Opinion Research Center (University of Chicago), est. 1947, and the Institute for Survey Research (University of Michigan), est. 1949.
o At first, traditional social researchers were wary of quantitative research and skeptical of bringing a technique popular within private industry into the university.

Errors in selecting respondents -
o Sampling errors (using a non-probability sample)
o Coverage errors (a poor sampling frame that omits certain groups of people)
o Non-response errors at the level of a sampled unit (a respondent refuses to answer)

Errors in responding to survey questions -
o Non-response errors specific to a survey item (certain questions are skipped or ignored)
o Measurement errors caused by the respondent (e.g., the respondent does not listen carefully to the question in a face-to-face or phone survey)
o Measurement errors caused by the interviewer (the interviewer is sloppy in reading questions or recording answers)

Survey administration errors -
o Post-survey errors (mistakes in cleaning the data or transferring it into electronic form)
o Mode effects (differences due to survey method: mail, in person, internet)
o Comparability errors (different survey organizations, nations, or time periods yield different data for the same respondents on the same issues)

Leading (loaded) questions - can lead respondents toward either positive or negative answers.

Double-Barreled Question - two or more questions in one, which causes ambiguity; you cannot be sure of the respondent's intention. Fix: ask two separate questions, one about pension benefits and one about health insurance.

Non-Contingency Question - Example: "In the past year, how often have you used a seat belt when you have ridden in the backseat of a car?" (Always, Sometimes, Never)

False Premise - occurs when a researcher begins a question with a premise that the respondent may disagree with and then offers choices regarding that premise. This can irritate respondents and forces them to answer a question whose premise they may reject.

Factors that influence respondent recall -
o The topic (is it threatening, embarrassing, socially desirable?)
o Timing of events (recalling events that occurred simultaneously or subsequently)
o Significance of the event for the person

o The situational conditions (question wording and interview style)

Telescoping - when respondents compress time when asked about past events.
o Backward telescoping: remembering an event as occurring earlier than it actually did.
o Forward telescoping: remembering an event as occurring more recently than it actually did.

Techniques to reduce telescoping - situational framing, decomposition, landmark anchoring, bounded recall.

Situational Framing - when a researcher asks the respondent to recall a specific situation and then follows up by asking details about it. Examples:
o "Tell me about the day that you were married; start with the morning."
o "Tell me about the day that you graduated from college. Who attended your graduation?"

Decomposition - when a researcher asks about several specific events and then adds them up (see the sketch below). Example of an ordinary survey question: "Approximately how many days during the past month did you consume an alcoholic beverage (wine, beer, or liquor)?" Example of a decomposition survey question: "During the past month, approximately how many days did you consume an alcoholic beverage (wine, beer, or liquor) at the following locations?"

Landmark Anchoring - the researcher asks the respondent whether something occurred before or after a major event. Examples:
o "Did that happen before or after you graduated from college?"
o "Did you lose your job before or after 9/11?"
o "Did you move to Los Angeles before or after the birth of your first child?"

Bounded Recall - the researcher asks the respondent about events that have occurred since the last interview. Examples:
o "We last talked two years ago; since that time, what jobs have you held?"
o "Since the last time we spoke one month ago, have you been able to find a job?"
o "During our last interview you mentioned that you were applying to graduate school. Did you end up applying?"

Techniques to increase honest answers - create comfort and trust, use enhanced phrasing, establish a desensitizing context.
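A minimal sketch of the decomposition technique above (the locations and answers are invented): the location-specific responses are simply summed to estimate the overall count that the ordinary question would have asked for in one step.

    # Hypothetical answers to the decomposed drinking-days question.
    days_drank_by_location = {
        "at home": 4,
        "at a restaurant or bar": 3,
        "at a friend's house": 2,
        "somewhere else": 1,
    }
    total = sum(days_drank_by_location.values())
    print(f"Estimated drinking days in the past month: {total}")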

Establish a desensitizing context - the researcher asks about behaviors that are more serious than the ones he or she is really interested in. Example: a respondent may hesitate to answer a question about shoplifting, but if it follows a list of questions about much more serious crimes (robbery, burglary, etc.), it will appear less serious. This increases the chances it will be answered honestly.

Social Desirability Bias -

  • A problem in survey research in which respondents give a "normative" response or a socially acceptable answer rather than an honest answer.
  • People tend to overstate being highly cultured, giving money to charity, having a good marriage, having voted in the last election, etc.

Reducing Social Desirability Bias - phrase questions in a way that makes a norm violation appear less objectionable and provides "face-saving" alternatives. Example: "Did you vote in the last presidential election?" a. I did not vote in the November 6th election. b. I thought about voting but did not vote. c. I usually vote but did not get a chance to this time. d. Yes, I am sure that I voted in the November 6th election.

Knowledge Questions -
o Studies suggest that a large majority of the public cannot correctly answer elementary geography questions or name their elected leaders.
o Knowledge questions are important because they address the basis on which people make judgments and form opinions; they tell us whether people are forming opinions on inaccurate information.
Example: August 2009 Pew Research Poll
  1. Some critics of health care reform legislation say it includes the creation of so-called "death panels," or government organizations that will make decisions about who will and will not receive health services when they are critically ill. How much, if anything, have you heard about this? (A lot: 41%; A little: 45%; Nothing at all: 13%)
  2. From what you know, do you think it is true or not true that the health care legislation will create these so-called "death panels"? (True: 30%; Not true: 50%; Unsure: 20%)

Contingency Questions - a two-part survey question in which a respondent's answer to the first question directs him or her either to the next question or to a more specific follow-up question.

Swayed Opinion - falsely overstating a position (social desirability bias) or understating a position (sensitive topics).

Contingency Question Example - "In the past year have you ridden in the backseat of a car?"
o No (skip to next question)
o Yes → "How often did you wear a seatbelt while riding in the backseat?" (Always, Sometimes, Never)
During pilot testing, researchers learned that respondents who never rode in the backseat and respondents who rode in the backseat but never used a seatbelt both answered "never" to the non-contingency version; the non-contingency question was ambiguous.

Open-Ended Question - "How old are you? _________"

Closed-Ended Question - "How old are you?" a. 18-24 b. 25-29 c. 30-39 d. 40-49 e. 50-59 f. 60 or older

Partially Open Question - "What is your race/ethnicity?" a. African American/Black b. Asian/Pacific Islander c. Hispanic/Latino d. Native American Indian/Alaskan Native e. Other (please specify): ________________________

False Positive - when a respondent selects an attitude position when he or she lacks any knowledge of the topic.

False Negative - when a respondent refuses to answer a question or withholds information when he or she actually has information or holds an opinion.

Wording Effects - when the use of a specific word strongly influences how some respondents answer a survey question.

Recency Effect - in survey research, when respondents are more likely to remember the last answer response on a list and circle that answer rather than taking the question seriously. This occurs more often when surveys have many response categories.

Selective Refusals - when respondents refuse to answer certain questions (often those on sensitive topics). This can throw off results. Example: in 1992 more than one third of Americans refused to answer a sensitive question about racial integration. If the respondents who opposed racial integration answered "don't know," the results would appear more favorable to integration than they actually were. After adjusting for non-responses, researchers found that the percentage of Americans who favored racial integration dropped from 49% to 35%.
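Computer-administered surveys implement contingency questions as skip logic. Here is a minimal sketch of the backseat/seatbelt example above (wording abbreviated; the helper function is hypothetical):

    def ask(question, options):
        """Print a question with its options and return the typed answer."""
        print(question)
        for key, text in options.items():
            print(f"  {key}. {text}")
        return input("> ").strip().lower()

    rode = ask("In the past year have you ridden in the backseat of a car?",
               {"y": "Yes", "n": "No"})
    if rode == "y":
        ask("How often did you wear a seatbelt while riding in the backseat?",
            {"a": "Always", "s": "Sometimes", "n": "Never"})
    # Respondents who answered "No" skip the seatbelt item entirely,
    # avoiding the ambiguity of the non-contingency version.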

Order Effects - a problem in research design when the results of the study are attributable to the sequence of tasks in the experiment rather than to the independent variable.

Organization of a questionnaire -
o Questions should be put in an order that minimizes discomfort.
o Opening questions should be easy to answer, interesting, and pleasant.
o Place questions on the same topic together and introduce them with a short statement (e.g., "Now I would like to ask you some questions about healthcare").

Context Effects - occur when the respondent is influenced by the interview setting or interviewer (for face-to-face surveys) or by the location (home, office, classroom, etc.) where the survey is filled out.

Non-response rates have five components -

  1. Location
  2. Contact
  3. Eligibility
  4. Cooperation
  5. Completion

Ways to deal with context effects -
  1. Use a funnel sequence of questions: ask more general questions before more specific ones.
  2. Divide the respondents in half, give one group the questions in one order and the other group the questions in a different order, then examine the results to see whether the order made a difference.

Improving Survey Response Rates - improve the location rate, the contact rate, the eligibility rate, and the cooperation rate.

Why have non-response rates increased over the past 50 years? -
o People believe there are too many surveys.
o Hectic lifestyles leave too little time.
o Fear of strangers and distrust of authority.
o Misuse of survey results to sell products.

Improving Location - make many repeat calls, vary the time of day, and lengthen the period for making contact.

Improving the Contact Rate - send letters in advance of the interview, offer to reschedule, and use small incentives.

Improving the Eligibility Rate - careful respondent screening, better sample-frame definitions, and multilingual interviewers.

Improving the Cooperation Rate - cooperation increases when a respondent believes the survey topic is salient to him or her. Cooperation among inner-city, low-income persons has increased when researchers use journalist-style letters and personal phone calls rather than academic-style letters.

Self-Administered Questionnaires: Advantages - quick, easy, cheap.

Social Exchange Theory - views the survey as a special type of interaction. A respondent behaves based on what he or she expects to receive in return for cooperation. To increase response rates and accuracy, researchers need to minimize the burdens of cooperating by making participation very easy.

Leverage Saliency Theory - holds that salience, or interest and motivation, varies by respondent. Different people value, either positively or negatively, specific aspects of the survey process (length of time, topic of the survey, sponsor). To maximize survey cooperation, researchers need to tailor the introduction and survey process to the respondent.

10 Ways to Increase Mail Questionnaire Response -
o Address the questionnaire to a specific person, not "occupant," and send it first-class mail.
o Include a carefully written, dated cover letter on letterhead stationery. In it, request respondent cooperation, guarantee confidentiality, explain the purpose of the survey, and give the researcher's name and phone number.
o Always include a postage-paid, addressed return envelope.
o The questionnaire should have a neat, attractive layout and reasonable page length.
o The questionnaire should be professionally printed, be easy to read, and have clear instructions.
o Send a follow-up reminder to those not responding.
o Do not send questionnaires during major holiday periods.
o Do not put questions on the back page. Instead, leave a blank space and ask the respondent for general comments.
o Sponsors that are local and seen as legitimate (government agencies, universities, large firms) get better response rates.
o Include a small monetary inducement ($1) if possible.

Self-Administered Questionnaires: Disadvantages - it is very difficult to get a representative sample.

Introduction and Entry - the interviewer gets in the door, shows authorization, and reassures and secures cooperation from the respondent.

Overall Costs -

  • For every $1 a mail survey costs, a telephone interview costs about $5 and a face-to-face interview costs $
  • The biggest expenses are labor costs for the professional staff who develop and pilot test the questionnaire, the costs of training interviewers, and the labor costs of clerical staff.

Surveys of Organizations - surveys of businesses, schools, and non-profit organizations. You should write a personal letter and explain the purpose of the survey, confidentiality, the use of the results, and any benefit to the organization for participating. Example: the U.S. News & World Report survey of colleges and universities.

Time Budget Surveys - a specialized type of survey in which respondents record details about the timing and duration of their activities over a period of time. Example: professors at a university were asked to be part of a time budget survey given by government officials who wanted to learn how much time professors devote to academic work activities. On average, professors worked 60 hours per week.

The Role of the Interviewer -
o The interviewer must obtain cooperation and build rapport, yet remain objective and neutral.
o The interviewer is encroaching on the respondent's time and privacy for information that may not directly benefit the respondent.
o The interviewer must try to reduce embarrassment, fear, and suspicion so that the respondent feels comfortable revealing information.
o Survey interviewers are non-judgmental and do not reveal their opinions, verbally or non-verbally. Example: if the respondent asks "What do you think?" the interviewer may answer, "We are interested in what you think; what I think doesn't matter."

Probe - a follow-up question in survey research interviewing that asks a respondent to clarify or elaborate on an incomplete or inappropriate answer.

Main Part of the Interview - the interviewer uses the exact wording on the questionnaire and goes at a comfortable pace.

Unintentional errors or interviewer sloppiness - contacting the wrong respondent, misreading a question, omitting questions, reading questions in the wrong order, recording the wrong answer to a question, or misunderstanding the respondent.

Types of probes -

  1. A 3-5 second pause is often effective.
  2. Nonverbal communication (tilt of the head, raised eyebrows, eye contact).
  3. "Any other reasons?" "Can you tell me more about that?" "Could you give me an example?"

Interview Exit -
  • The interviewer thanks the respondent and leaves.
  • The interviewer then goes to a quiet, private place to edit the questionnaire and record other details such as the date, time, and place of the interview.
  • The interviewer may record the respondent's attitude (serious, angry, laughing, etc.).
  • The interviewer may record unusual circumstances (the telephone rang, a teenage son entered the room, the television volume was loud).

6 Categories of Interview Bias -
  1. Errors by the respondent
  2. Unintentional errors or interviewer sloppiness
  3. Intentional subversion by the interviewer
  4. Influence due to the interviewer's expectations of the respondent
  5. Failure of an interviewer to probe or to probe properly
  6. Influence on the answers due to the interviewer's appearance

Errors by the respondent - forgetting, embarrassment, misunderstanding, or lying because of the presence of others. Example: when married women were interviewed alone, they were more likely to say that they do more of the housework than when their husbands were present.

Failure of an interviewer to probe or to probe properly - the interviewer does not follow up an incomplete or inappropriate answer, or probes in a way that distorts it.

Intentional subversion by the interviewer - purposeful alteration of answers, omission or rewording of questions, or choice of an alternative respondent.

Influence due to the interviewer's expectations of the respondent - the interviewer has expectations based on the respondent's appearance, living situation, or other answers.

Influence on the answers due to the interviewer's appearance - the interviewer's appearance, tone, attitude, reactions to answers, or comments made outside of the interview.

Pilot Test - when researchers distribute the survey to a small sample in an effort to correct any problems with interpretation, the format of the survey, etc.

Interview Context - words can have different meanings and implications depending on the social situation, who speaks the words, how they are spoken, and the social distance between listener and speaker.

Researchers have two models for interviewing -

(Positivist, standardized approach) The idea that questions should be worded and asked in exactly the same way in every single interview, so as to avoid bias; this also creates consistency and reliability in the measure. Critics refer to the standardized approach as the "naïve assumption model." They argue that it falsely assumes there are no communication problems between the interviewer and respondent, and that it overlooks that words can have different meanings based on social context.

Collaborative Encounter Model - (critical and interpretive approaches) views human encounters as highly dynamic, complex mutual interactions in which even minor forms of feedback (saying "hmm," smiling, nodding, body language, etc.) can have an influence. Proponents point to research showing that if an interviewer rewords a question, it can increase the reliability and validity of the answer.

Think-Aloud Interviews - a respondent explains his or her thinking out loud during the process of answering each question.

Cognitive Interviewing - a technique used in pilot testing surveys in which researchers try to learn about a questionnaire and improve it by interviewing respondents about their thought processes or having respondents "think out loud" as they answer the survey questions.

Methods for Improving Questionnaire with Pilot Tests -

  1. Think aloud interviews
  2. Retrospective interviews and targeted probes
  3. Expert evaluations
  4. Behavior coding
  5. Field experiments
  6. Vignettes and debriefing

Retrospective interviews and targeted probes - after completing a questionnaire, the respondent explains to the researchers the process used to select each response or answer.

Expert evaluations - an independent panel of experienced survey researchers reviews and critiques the questionnaire.

Behavior coding - researchers closely monitor interviews, often using audio or video recordings, for misstatements, hesitations, missed instructions, non-response, refusals, puzzled looks, and answers that do not fit the response categories.

Field experiments - researchers administer alternative forms of the questionnaire items in field settings and compare the results.

Manifest Coding - (quantitative coding) a type of coding in which a researcher first develops a list of words, phrases, or symbols and then locates them in a communication medium.

Vignettes and debriefing -

Interviewers and respondents are presented with short, invented "lifelike" situations and asked which questionnaire response category they would use. The Ethical Survey -

  1. Respondents have a right to privacy.
  • Respondents have a right to decide when and to whom to reveal personal information.
  • Researchers should treat all respondents with dignity, reduce discomfort, and protect the confidentiality of survey data.
  2. Respondents' participation must be voluntary.
  • Participants must voluntarily agree to take the survey.
  • In face-to-face interviews, "informed consent" is required.
  3. Surveys cannot be used to mislead the respondents or others.
  • Example: the pseudosurvey (defined below).

Pseudosurvey - a type of survey used in an attempt to persuade someone to do something, with little or no real interest in learning anything from the respondent. The survey is used as a guise to gain entry into homes, invade privacy, or sell something. Example: in a 1994 U.S. election campaign, an unknown survey organization telephoned potential voters and asked whether the voter supported a given candidate. If the voter supported the candidate, the interviewer asked whether the respondent would still support the candidate if he or she knew that the candidate had used illegal drugs or been arrested for drunk driving.

Prestige Bias -
  • When questions include terms about a highly prestigious person, group, or institution, and a respondent's feelings toward that person or group overshadow how he or she answers the question. Example: "Kobe Bryant thinks that Mayor Garcetti is doing an excellent job; what do you think?"

Leading Question -
  • can lead respondents to either positive or negative answers.
  • Example: "You don't smoke, do you?" Social Desirability Bias Def & Ex -
  • A problem in survey research in which respondents gives a "normative" response or a socially acceptable answer rather than an honest answer. Example: Did you vote in the last presidential election? (Check text for example) a. I did not vote in the November 6th election. b. I thought about voting but did not vote. c. I usually vote but did not get a chance to this time. d. Yes, I am sure that I voted in the November 6th election. Double-Barreled Question def & ex -
  • Two or more questions in one, which causes ambiguity; you cannot be sure of the respondent's intention.

Example: "Does your employer offer pension and health Insurance benefits?" Social Desirability Bias def & ex -

  • A problem in survey research in which respondents gives a "normative" response or a socially acceptable answer rather than an honest answer. Example: Did you vote in the last presidential election? (Check text for example) a. I did not vote in the November 6th election. b. I thought about voting but did not vote. c. I usually vote but did not get a chance to this time. d. Yes, I am sure that I voted in the November 6th election. Situational Framing -
  • When a researcher asks the respondent to recall a specific situation and then follows up by asking details about it. Example: "Tell me about the day that you were married; start with the morning."

Non-Reactive Research -
  • Content Analysis
  • Analysis of Existing Statistics
  • Secondary Data Analysis
  • Historical Comparative Research
  • Field Research (in cases where the researcher only makes observations)

Content Analysis - a technique for gathering and analyzing the content of text (words, meanings, pictures, symbols, ideas, themes, or any message that can be communicated). Examples: books, newspaper or magazine articles, advertisements, speeches, official documents, films/videos/TV, musical lyrics, photographs, articles of clothing, websites, works of art, etc.

Analysis of Existing Statistics - analysis of, e.g., crime statistics; birth, death, and marriage records; labor statistics; etc.

Secondary Data Analysis - when researchers analyze survey data collected by another researcher (e.g., General Social Survey data).

Historical Comparative Research - when researchers compare entire cultures or societies to learn about macro patterns and long-term trends.

Reactive Research -
  • A type of social research in which people are aware that they are being studied.
  • Experiments
  • Survey Research
  • Qualitative Interviews
  • Field Research (in cases where the researcher interacts with people in the field)

Examples of Non-Reactive or Unobtrusive Measures -
  1. Physical Traces
  2. Archives
  3. Observation

Physical Traces -
  • Erosion Measures
  • Accretion Measures

Erosion Measures - non-reactive measures of the wear or deterioration on surfaces due to the activity of people. Example: a researcher examines children's toys at a day care that were all purchased at the same time; worn-out toys suggest greater interest by the children.

Accretion Measures - non-reactive measures of the residue of people's activity, or what they leave behind. Example: studying photographs left behind from certain historical eras to see how gender relations within the family are reflected in seating patterns.

Archives - (public records) a researcher can examine marriage records for the bride's and groom's ages, birth and death rates, tax registries, average home prices, etc.
  • Diaries, letters, correspondence: a researcher can examine personal diaries, letters, correspondence, or the logs of a family-owned business to see what life was like during a particular historical era.
  • Photographs: a researcher can examine photographs to determine …

Reliability of Manifest Coding - is high, because computers can count whether or not words appear in the text.

Observation -
  • External appearance: a researcher watches students to see whether they are more likely to wear their school's colors and symbols after the school team has won or lost.
  • Counting behaviors: a researcher counts the number of men and women who come to a full stop at a stop sign and those who come to a rolling stop; this may suggest gender differences in driving.
  • Time duration: a researcher can measure how long men and women pause in front of a painting of a nude man and in front of a painting of a nude woman; time may indicate embarrassment, interest, curiosity, etc.

Quantitative Content Analysis - often focuses on defining concepts, constructing variables, and measuring those variables. Example: how many print ads in a magazine feature beauty products, automobiles, or clothing? The researcher can count

the representation of these images and analyze the numbers using statistics. Example breakdown of advertisements: 26% clothing, 10% jewelry, 41% cosmetics/skin care, 23% shoes.

Qualitative Content Analysis - qualitative researchers often use interpretive or critical approaches to study documents and reports. Each image is a cultural object and carries symbolic social meaning. Example: in an examination of print ads in a magazine, what themes come across? Themes might include: beauty can bring you power and happiness; owning a certain type of car makes you more masculine, attracts women to you, and indicates you are virile.

Coding System - a set of instructions or rules used in content analysis that explains how a researcher systematically converted the symbolic content of text into quantitative data.

Structured Observation - a method of watching what is happening in a social setting that is highly organized and follows systematic rules for observation and documentation. In content analysis, the researcher is observing the text that he or she is analyzing.

Inferences - the inferences a researcher can make from the results are critical. In content analysis, inferences cannot reveal the intentions of those who created the text or the effects the messages have on those who receive them. Example: a content analysis shows that children's books contain gender stereotypes. That does not necessarily mean that the stereotypes in the books shape the beliefs or behaviors of children; you would need a separate study to make that inference.

Reliability of Latent Coding - tends to be lower, since it depends on the coder's knowledge of language, images, social meaning, etc. Training and practice, however, can increase reliability.

Validity of Manifest Coding - can be harder to achieve because computer programs cannot interpret the meaning of words in context (e.g., red ink, red hot, red fire truck, red herring, Red Scare). Example: a researcher counts the number of times the word "red" appears in written text.

Latent Coding - (qualitative coding) a type of coding in which a researcher identifies subjective meaning, such as themes or motifs, and then systematically locates it in a communication medium.
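A minimal sketch of manifest coding (the word list and documents are hypothetical): the program counts predefined words, which is why reliability is high, while, as the validity entry above notes, context such as "red herring" versus "red ink" is invisible to it.

    import re
    from collections import Counter

    CODE_WORDS = {"red", "blue", "green"}

    def manifest_code(text):
        """Return counts of the predefined coded words appearing in the text."""
        words = re.findall(r"[a-z']+", text.lower())
        counts = Counter(w for w in words if w in CODE_WORDS)
        return {w: counts.get(w, 0) for w in sorted(CODE_WORDS)}

    documents = [
        "The red fire truck raced past the red house.",
        "A blue sky over green fields.",
    ]
    for i, doc in enumerate(documents, 1):
        print(f"Document {i}: {manifest_code(doc)}")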

Validity of Latent Coding - is usually high, since people communicate meaning in many implicit ways that depend on context, not just specific words. Example: a researcher examines websites dedicated to covering foreign political issues and identifies themes, images, and symbolic meanings.

Intercoder Reliability - equivalence reliability in content analysis, which identifies the degree of consistency among coders using a statistical coefficient (Krippendorff's alpha); see the sketch below.

Secondary Analysis of Survey Data - a special case of existing statistics: the researcher statistically analyzes survey data originally gathered by another researcher.

ICPSR (Inter-University Consortium for Political and Social Research) - the world's major archive of survey research data, with over 17,000 data sets. Data sets are made available to researchers at modest cost.

NORC (National Opinion Research Center) -
o Has collected data for the General Social Survey (GSS) almost every year since 1972.
o Researchers survey a representative sample of 1,500 U.S. residents.
o Face-to-face survey interviews are conducted in people's homes.
o The NORC staff carefully select and recruit a diverse group of interviewers.
o Interviewers are race-matched with respondents.
o Interviews contain 500 questions and last 90 minutes.
o The GSS datasets are publicly available for free.

Limitations of Existing Statistics and Secondary Data -
o The secondary data or existing statistics may be inappropriate for your research question.
o The researcher may not understand the topic well enough and may interpret the data incorrectly.
o Fallacy of misplaced concreteness.
o The researcher cannot find the appropriate unit of analysis.

The secondary data or existing statistics may be inappropriate for your research question - Example: you want to examine racial-ethnic tensions between Hispanics and Whites across the U.S., but your secondary data cover only states in the western part of the U.S.

The researcher may not understand the topic well enough and may interpret the data incorrectly - Example: the researcher uses data on high school graduation rates in Germany but does not know much about the German secondary education system and makes serious errors in interpreting the results.

Fallacy of misplaced concreteness - occurs when someone gives a false impression of precision by quoting statistics in more detail than warranted. Example: from GSS data you report that the percentage of divorced people is 15.65495 in order to appear more scientific; it is much better to report that approximately 15.7% of the population is currently divorced.
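Krippendorff's alpha is normally computed with a dedicated statistics package; as a cruder but common first check on intercoder reliability, here is a minimal sketch of simple percent agreement between two coders (the codes are hypothetical):

    # Each list holds the code one coder assigned to the same six text units.
    coder_a = ["theme1", "theme2", "theme1", "theme3", "theme2", "theme1"]
    coder_b = ["theme1", "theme2", "theme2", "theme3", "theme2", "theme1"]

    agreements = sum(a == b for a, b in zip(coder_a, coder_b))
    percent_agreement = agreements / len(coder_a)
    print(f"Percent agreement: {percent_agreement:.0%}")  # 5/6, about 83%

Unlike Krippendorff's alpha, percent agreement does not correct for agreement expected by chance, which is why the coefficient is preferred in published work.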

The researcher cannot find the appropriate unit of analysis - a common problem with existing statistics is finding appropriate units of analysis; many statistics are published for aggregates, not individuals. Example: the researcher's question is, "Are unemployed people more likely to commit property crimes?" The potential for committing the ecological fallacy (using larger units of analysis to draw conclusions about smaller units of analysis) arises here: since you have aggregate data (data on the population) and not data on individuals, there is no way the researcher could determine whether the individual people who are unemployed are the same individual people committing property crimes.

Problems with Validity -
o Definitions don't match.
o The researcher relies on official statistics as a proxy (replacement) for a construct, and the official statistics have issues.
o The researcher lacks control over how the information was collected.

Definitions don't match - a researcher's definition of a construct does not match the definition used by the government agency or organization that collected the data. Example: a researcher's definition of unemployment might include people who would work if a good job were available, people who have to work part-time but want to work full-time, and those who have given up looking for work; the official definition of unemployment includes only those who are actively seeking work (full-time or part-time).

The researcher relies on official statistics as a proxy for a construct, and the official statistics have issues - Example: you want to know how many people are victims of hate crimes, so you use police statistics on hate crime as a proxy. This measure is not entirely valid, because many victims do not report hate crimes to the police; official statistics do not always reveal all that occurred.

The researcher lacks control over how the information was collected - Example: a university researcher re-examined the methods used by the U.S. Bureau of Labor Statistics and found an error. Data on permanent job losses came from a survey of 50,000 people, but the government agency failed to adjust for a high non-response rate. Corrected figures showed a decline in the number of people laid off between 1993 and 1996, where the original report had shown no change.

Problems with Reliability -
o Official definitions or the method of collecting information changes over time.
o Equivalence reliability issues.
o Representative reliability issues.

Official definitions or the method of collecting information changes over time -

Example: in the 1980s the method for calculating the U.S. unemployment rate changed. Old method: number of unemployed persons / number of persons in the civilian workforce. New method: number of unemployed persons / number of persons in the civilian workforce and military.

Equivalence reliability issues - the researcher realizes the measure yields inconsistent results across different indicators. Example: a measure of crime across a nation depends on each police department providing accurate information; studies of police departments suggest political pressures to increase arrests (to show the department is tough on crime) or to decrease arrests (to lower crime rates prior to an election).

Representative reliability issues - the researcher realizes that an indicator delivers inconsistent measures across a subpopulation. Example: the U.S. Bureau of Labor Statistics found a 0.6 increase in female unemployment after it started using gender-neutral measurement procedures. Until the mid-1990s, interviewers commonly asked women if they had been "keeping house"; if they responded "yes," they were recorded as housewives and not as part of the unemployed.

Problems with Missing Data - sometimes the data were collected but lost; more frequently, the data were never collected.
o Government agencies start and stop collecting information for political, budgetary, or other reasons. During the early 1980s, cost-cutting measures by the U.S. federal government stopped the collection of information that social researchers found valuable.
o The government has recorded work stoppages and strikes in the U.S. since the 1890s, but five years of data (1912-1916) are missing because the government stopped collecting data during those years.

Ethical Concerns -
o Privacy and confidentiality
o Official statistics being used as social and political products (for both progressive and conservative causes)

Privacy and Confidentiality - ethical concerns are not at the forefront of most non-reactive research because the people you study are not directly involved. It is very important for the researcher to maintain the privacy and confidentiality of participants when using data someone else has collected.

Official statistics being used as social and political products - Example: official statistics collected by the government can stimulate public attention, and political activism can also lead government to collect data. Drunk driving became a public issue only after government agencies began collecting statistics on the number of automobile accidents in which alcohol was a factor. Activism pushed for data to be collected on the number of patients who die in public mental hospitals and on the number of non-White students enrolled in U.S. public schools. Example: some activist groups have argued for ending the collection of certain types of data.

Conservative activists have pushed for police departments not to collect data on the race/ethnicity of persons pulled over while driving. Some libertarian groups have pushed for the U.S. Census Bureau to stop collecting data.

Uecker et al. (2007), "Losing my religion: The social sources of religious decline in early adulthood" - The researchers used data from the National Longitudinal Study of Adolescent Health (a school-based, three-part panel survey on health and related social behaviors). They were interested in explaining the declines in religious involvement that occur as adolescents move into adulthood. DVs: religious involvement, importance of religion in one's life, feelings about organized religion. IVs: college attendance, cohabitation, non-marital sex, drug and alcohol use. Findings: people who went to college remained as religious as those who did not go, which runs contrary to popular belief; increased drug and alcohol use and pre-marital sex were correlated with reduced importance of religion in one's life.

Lieberson et al. (2000), "The instability of androgynous names: The symbolic maintenance of gender boundaries" - An androgynous first name is one that can be given to either a girl or a boy without clearly marking the child's gender. Some argue the feminist movement decreased gender marking in children's names. The researchers examined existing statistical data in the form of computerized records from the birth certificates of 11 million White children born in Illinois from 1916 to 1989. Findings: androgynous first names are rare (3%), and there has been only a very slight historical trend toward androgyny; parents are more likely to give androgynous names to girls than to boys; androgynous names are unstable.

Behm-Morawitz and Mastro (2008), "Mean girls? The influence of gender portrayals in teen movies on emerging adults' gender-based attitudes and beliefs" - Mean Girls was a 2004 movie about teen girls who obtain rewards and feel pleasure by being socially aggressive. The researchers searched the internet to identify 90 U.S. teen films released between 1995 and 2005 and picked the 20 with the highest box-office sales. They trained staff to code socially cooperative behavior, socially aggressive behavior, and the positive and negative consequences of each behavior. In a statistical analysis, the researchers found that both males and females were more often rewarded than punished for social aggression, with females significantly more likely to be rewarded. The researchers later interviewed college undergraduates and found that those who watched the most teen movies and most identified with the characters were more likely to believe that social aggression is rewarded with increased popularity among peers.