An in-depth exploration of quantitative research methods in the social sciences, focusing on experiment design, sampling techniques, and potential sources of bias. Key topics include the importance of proper study design, the use of mild deception, population generalization, and the role of pre-tests, pilot tests, and debriefing. The document also covers errors in sampling, leading questions, double-barreled questions, order effects, cooperation rate, interview context, social desirability bias, and non-reactive measures. It also discusses the Chicago School (Phase I), theoretical sampling, and types of questions asked in field research interviews.
Four trends that sped the expansion of the experiment -
o Behaviorism
o Quantification
o Treating subjects anonymously
o Use of experimental methods for applied purposes

Behaviorism -
o A school of psychology founded in the 1920s that emphasized measuring observable behavior or outcomes of mental life and advocated the experimental method for conducting rigorous empirical tests of hypotheses.

Quantification -
o Measuring social phenomena with numbers. Researchers began quantifying social constructs such as spirit, consciousness, and will. Scales and indexes were developed to measure abstract concepts. Example: the use of IQ tests to quantify intelligence.

Treating Subjects Anonymously -
o Early social research reports contained the names of the specific individuals who participated; most were other professional researchers.
o Later research treated participants anonymously.
o Over time there was a shift to using college students and schoolchildren as research participants.
o Over time the relationship between experimenters and participants became more detached, remote, and objective.

Use of experimental methods for applied purposes -
o Experimental methods came to be used for applied purposes beyond basic research, in practical settings such as industry, education, and the military.

Subjects -
o The term used for participants in experimental research, in contrast to the "participants" of qualitative studies.

Random Assignment -
o Participants are randomly assigned to either the experimental or the control group. (See the sketch below.)

Experimental Group - The participants who receive the treatment.
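The Random Assignment entry above can be made concrete with a short sketch. This is a minimal Python illustration (the participant IDs and function name are hypothetical, not from the original material): it shuffles the participant list and splits it in half, one half to receive the treatment and one half not.

```python
import random

def random_assignment(participants, seed=None):
    """Randomly split participants into experimental and control groups."""
    rng = random.Random(seed)
    shuffled = participants[:]   # copy so the input list is untouched
    rng.shuffle(shuffled)        # random order guards against selection bias
    half = len(shuffled) // 2
    return {
        "experimental": shuffled[:half],  # will receive the treatment
        "control": shuffled[half:],       # will not receive the treatment
    }

groups = random_assignment(["P01", "P02", "P03", "P04", "P05", "P06"], seed=42)
print(groups["experimental"], groups["control"])
```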
Control Group - The participants who do not receive the treatment.

Pre-Test -
o A test that measures the dependent variable of an experiment prior to the treatment.
o Example: before the study takes place, you can ask participants how often they smoke, what their health is like, etc.

Independent Variable (Stimulus/Treatment) -
o The treatment or stimulus that the experimenter manipulates in experimental research.

Dependent Variable -
o The outcome of the experiment.

Post-Test -
o A test that measures the dependent variable of an experiment after the treatment.
o You would essentially ask the same questions as in the pre-test.

Classical Design -
Random Assignment: Yes; Pretest: Yes; Posttest: Yes; Control Group: Yes; Experimental Group: Yes (see the sketch after these entries)

Deception -
o A lie by an experimenter to participants about the true nature of an experiment, or the creation of a false impression through his or her actions or the setting.
o Example: giving the subjects a sugar pill.

Confederate -
o A person working for the experimenter who acts as another participant, or in a role in front of participants, to deceive them with an experimenter's cover story.
o Example: someone who is in on the experiment, as in the shock (obedience) study.

Cover Story -
o A type of deception in which the experimenter tells a false story to participants so that they will act as wanted and do not know the true nature of the study.

One-Shot Case Study -
Random Assignment: No; Pretest: No; Posttest: Yes; Control Group: No; Experimental Group: Yes

Pre-Experimental Designs - Experimental plans that lack random assignment or use shortcuts and are much weaker than the classical design. They are substituted in situations where an experimenter cannot use all of the features of a classical design.
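The Classical Design entry above combines all of these features, which is what lets a researcher estimate a treatment effect by comparing each group's change from pre-test to post-test. A minimal sketch, assuming made-up scores purely for illustration:

```python
def mean(xs):
    return sum(xs) / len(xs)

# Hypothetical pre-test/post-test scores for each group (illustrative numbers).
pre_experimental, post_experimental = [4, 5, 6, 5], [8, 9, 7, 8]
pre_control,      post_control      = [5, 4, 6, 5], [6, 5, 6, 6]

# Change within each group, then the difference between those changes:
# shared influences (history, maturation, testing) affect both groups,
# so subtracting the control group's change isolates the treatment's effect.
effect = (mean(post_experimental) - mean(pre_experimental)) - \
         (mean(post_control) - mean(pre_control))
print(f"Estimated treatment effect: {effect:.2f}")
```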
Quasi-Experimental Designs - Plans that are stronger than pre-experimental designs but are still variations of the classical design. They are used in situations where the experimenter has limited control over the independent variable.

Interrupted Time Series -
o An experimental plan in which the dependent variable is measured periodically across many time points and the treatment occurs in the midst of such measures, often only once.

Equivalent Time Series -
o An experimental plan in which there are several repeated pretests, posttests, and treatments for one group, often over a period of time.

Latin Square -
o An experimental plan to examine whether the order or sequence in which participants receive versions of the treatment has an effect. (A sketch of one such ordering appears after the design summaries below.)

Solomon Four-Group Design -
o An experimental plan in which participants are randomly assigned to two control groups and two experimental groups; only one experimental group and one control group receive a pre-test; all four groups receive a post-test. This design is used to check for the testing effect.

Factorial Designs -
o An experimental plan that considers the impact of several independent variables simultaneously.

One-Group Pretest-Posttest -
Random Assignment: No; Pretest: Yes; Posttest: Yes; Control Group: No; Experimental Group: Yes

Static Group Comparison -
Random Assignment: No; Pretest: No; Posttest: Yes; Control Group: Yes; Experimental Group: Yes

Two-Group Posttest-Only -
Random Assignment: Yes; Pretest: No; Posttest: Yes; Control Group: Yes; Experimental Group: Yes

Time Series Design -
Random Assignment: No; Pretest: Yes; Posttest: Yes; Control Group: No; Experimental Group: Yes
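The Latin Square entry above rotates the order in which treatment versions are given so that every version appears once in every position. One common construction (assumed here; other balanced orderings exist) is the cyclic square, where each row shifts the order by one position:

```python
def latin_square(versions):
    """Return a cyclic Latin square: row i is the version list rotated by i,
    so each version appears exactly once in each order position."""
    k = len(versions)
    return [versions[i:] + versions[:i] for i in range(k)]

# Each row is the treatment order given to one group of participants.
for row in latin_square(["A", "B", "C", "D"]):
    print(" -> ".join(row))
```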
Maturation Effect - occurs when natural processes within participants (growing older, more tired, bored, or more experienced) change the outcome over the course of the experiment, apart from the treatment.

Selection Bias - occurs when groups in an experiment are not equivalent at the beginning of the study.
o Example: In an experiment on physical aggressiveness, the experimental group unintentionally contains subjects who are football, rugby, and hockey players and wrestlers, whereas the control group is made up of musicians, chess players, ballet dancers, and painters.

History Effect - results from an unplanned event that occurs outside the control of the experimenter.
o Example: A study is being conducted on people's fears. Halfway through a two-week experiment, an airplane crashes in the city where the experiment is being conducted.

Compensatory Behavior - when participants in the control group modify their behavior to make up for not getting the treatment.
o Example: Once an inequality becomes known to another school system (the control group), participants in the control group work extra hard to prepare for the SATs to overcome the inequality.

Testing Effect - occurs because the very process of conducting a pre-test can have an effect on the outcome (dependent variable).
o Example: A researcher gives students an examination (pre-test) on the first day of class. Half of the students in the class are then assigned to the experimental group and receive extra help from a tutoring program, while students in the control group do not. By the last day of class, if any of the students learned from the pre-test, it could affect how they perform on the post-test. This can throw off the results because the researcher does not know whether the treatment (tutoring program), exposure to the pre-test, or a mix of both is why students' performance improved on the post-test.

Instrumentation - occurs when the instrument changes during the experiment.
o Example: a video camera breaks, a recording device does not work properly, one of the observers gets sick and must be replaced by another, etc.

Experimental Mortality - when some research participants do not continue throughout the entire experiment.
o Example: You begin a weight-loss study with 60 people. At the end of the program only 40 remain, each of whom lost 5 pounds. The twenty who left could have had different results than the 40 who stayed.

Statistical Regression (Regression toward the mean) -
o A threat to internal validity from measurement instruments providing extreme values and a tendency for random errors to move extreme results toward the average.
o Example: In an experiment, if you happen to have individuals who do extremely well or extremely poorly on a pretest (score really high or really low), it is likely that their scores will move closer to the mean after the treatment when they are measured on the posttest.

Diffusion of Treatment - when the treatment "spills over" from the experimental group to the control group and participants modify their behavior because they have learned about the treatment.
o Example: Subjects participate in a day-long experiment on a new way to memorize words. During a break, experimental and control group subjects cross paths in the bathroom and begin talking.

Double-Blind Study - a design intended to control experimenter expectancy. In this type of experiment, the only people who have direct contact with participants do not know the details of the hypothesis or treatment.

Experimenter Expectancy - a threat to internal validity that occurs when the experimenter's expectations about the results are unintentionally communicated to participants and influence their behavior.
o Example: A researcher studying reactions toward those with disabilities deeply believes that females are more sensitive toward those with disabilities than males are.
o Through eye contact, tone of voice, pauses, and other nonverbal communication, the researcher unconsciously encourages female research participants to report positive feelings toward those with disabilities.

10 things to avoid when constructing a survey -
o Jargon (technical terms): Ex. plumbers talk about snakes; lawyers about uberrima fides; psychologists about the Oedipus complex.
o Abbreviations: Ex. MSS can stand for several things: Manufacturers Standardization Society, Marine Systems Simulator, Medical Student Society, etc.
o Slang: Ex. bonkers (crazy), pad (apartment), munchies.

Demand Characteristics -
o A threat to internal validity that occurs when research participants pick up clues about the hypothesis or an experiment's purpose and modify their behavior to what they think the research demands of them (e.g., supporting the hypothesis).
o It is important for researchers to properly design the study and use mild deception as needed to protect against demand characteristics.

Placebo Effect -
o Occurs when participants do not receive the real treatment but receive a non-active or imitation treatment, yet respond as though they have received the real treatment.
o Example: You create an experiment to help heavy smokers stop smoking. You give the experimental group a pill to reduce nicotine dependence.
o The control group gets a placebo (empty pill).
o If participants who received the placebo also stop smoking, then merely participating in the experiment and taking something that they believed would help them quit smoking had an effect.

Population generalization -
o When researchers can accurately generalize from what they learn in an experiment to a population.
o Example: An experiment is conducted on 100 undergraduates at a university. To whom can the researcher generalize the findings? All undergraduate students at that university who attended that year? All college students in the same country? All people in society?
o To improve the population generalization form of external validity in an experiment, researchers draw a random sample from the population and conduct the experiment with the sampled individuals.

Naturalistic generalization -
o Whether or not a researcher can generalize accurately from what was learned in an artificially created, controlled laboratory setting to real-life natural settings.

Mundane Realism -
o Asks whether an experiment or a situation is like the real world.
o Example: You have children play with toys in a laboratory with four white walls. Mundane realism would be stronger if you created a real-life play area.

Experimental Realism -
o The impact of an experimental treatment or setting on people; it occurs when participants are caught up in the experiment and are truly influenced by it. It remains weak if subjects remain unaffected and the experiment has little impact on them.
o Example: The Stanford Prison Experiment (1971). Participants were randomly divided into two groups (prisoners and guards) and were put in a mock prison. Caught up in the experiment, the guards enforced authoritarian tactics, and many prisoners accepted psychological abuse. The experiment was called off after six days because the experimenters feared violence.

Theoretical generalization -
o Asks whether the researcher can accurately generalize from an abstract theory that he or she is testing to a set of measures in the experiment.
o It depends on:
o Whether the experiment has strong experimental realism
o Measurement validity (how well variables measure what the researcher intends to measure)
o Whether the researcher is able to control confounding variables (variables that could damage the internal validity of the experiment)

Threats to External Validity - conditions that limit a researcher's ability to generalize experimental findings beyond the study, such as reactivity (the Hawthorne effect).
Reactivity (Hawthorne effect) -
o A reactivity result named after a famous case.
o A series of experiments was conducted at the Hawthorne plant of the Western Electric Company in Illinois during the 1920s.
o Researchers modified many aspects of working conditions (lighting, time for breaks, etc.) and measured worker productivity.
o They discovered productivity rose after each modification, no matter what it was.
o Workers did not respond to the treatment, but instead responded to the fact that they were being watched.

Manipulation Check - a measure used to verify that participants actually noticed or experienced the treatment (independent variable) as the experimenter intended.
External validity - the ability to generalize findings from an experiment beyond the study itself to settings and populations in the real world.
...document social problems and advance policies that would help unemployed workers and promote racial equality. After the war, many researchers left government and returned to universities, where they founded new social research organizations:
o National Opinion Research Center (University of Chicago), est. 1947
o Institute for Survey Research (University of Michigan), est. 1949
o At first, traditional social researchers were wary of quantitative research and skeptical of bringing a technique popular within private industry into the university.

Errors in selecting respondents -
o Sampling errors (using a non-probability sample)
o Coverage errors (poor sampling frame; omits certain groups of people)
o Non-response errors at the level of a sampled unit (respondent refuses to answer)

Errors in responding to survey questions -
o Non-response errors specific to a survey item (certain questions are skipped or ignored)
o Measurement errors caused by the respondent (e.g., respondent does not listen carefully to the question in a face-to-face or phone survey)
o Measurement errors caused by the interviewer (interviewer is sloppy in reading questions or recording answers)

Survey administration errors -
o Post-survey errors (mistakes in cleaning the data or transferring data into electronic form)
o Mode effects (differences due to survey method: mail, in person, internet)
o Comparability errors (different survey organizations, nations, or time periods yield different data for the same respondents on the same issues)

Leading (loaded) questions - questions phrased so that they lead respondents toward either positive or negative answers.

Double-Barreled Question - Two or more questions in one, which causes ambiguity; you cannot be sure of the respondent's intention.
o Example: "Does your employer offer pension and health insurance benefits?" Fix: Ask two separate questions, one about pension benefits and one about health insurance.

Non-Contingency Question - "In the past year, how often have you used a seat belt when you have ridden in the backseat of a car?" (Always, Sometimes, Never)

False Premise Question - occurs when a researcher begins a question with a premise with which the respondent may disagree and then offers choices regarding that premise. This can irritate respondents and forces them into answering a question with which they may disagree.

Factors that influence respondent recall -
o The topic (is it threatening, embarrassing, socially desirable?)
o Timing of events (recalling events that occurred simultaneously or subsequently)
o Significance of an event for a person
o The situational condition (question wording and interview style)

Telescoping -
o When respondents compress time when asked about past events.
o Backward telescoping: remembering an event as occurring earlier than it actually did.
o Forward telescoping: remembering an event as occurring later (more recently) than it actually did.

Techniques to reduce telescoping -
o Situational framing
o Decomposition
o Landmark anchoring
o Bounded recall

Situational framing -
o The researcher asks the respondent to recall a specific situation and then follows up by asking details about it. Examples:
o "Tell me about the day that you were married; start with the morning."
o "Tell me about the day that you graduated from college. Who attended your graduation?"

Decomposition -
o The researcher asks about several specific events and then adds them up.
o Example survey question: "Approximately how many days during the past month did you consume an alcoholic beverage (wine, beer, or liquor)?"
o Example decomposition survey question: "During the past month, approximately how many days did you consume an alcoholic beverage (wine, beer, or liquor) at the following locations?"

Landmark anchoring -
o The researcher asks the respondent whether something occurred before or after a major event. Examples:
o "Did that happen before or after you graduated from college?"
o "Did you lose your job before or after 9/11?"
o "Did you move to Los Angeles before or after the birth of your first child?"

Bounded recall -
o The researcher asks the respondent about events that have occurred since the last interview. Examples:
o "We last talked two years ago; since that time, what jobs have you held?"
o "Since the last time we spoke one month ago, have you been able to find a job?"
o "During our last interview you mentioned that you were applying to graduate school. Did you end up applying?"

Techniques to increase honest answers -
o Create comfort and trust
o Use enhanced phrasing
o Establish a desensitizing context
Establish a desensitizing context -
o The researcher asks about behaviors that are more serious than the ones he or she is really interested in.
o Example: A respondent may hesitate to answer a question about shoplifting, but if it follows a list of questions about much more serious crimes (robbery, burglary, etc.), it will appear less serious. This increases the chances it will be answered honestly.

Social Desirability Bias - occurs when respondents distort their answers to conform to social norms and present themselves in a favorable light.
Contingency Question - "In the past year have you ridden in the backseat of a car?"
o No (skip to next question)
o Yes → "How often did you wear a seatbelt while riding in the backseat?" (Always, Sometimes, Never)
o During pilot testing, researchers learned that respondents who never rode in the backseat and respondents who rode in the backseat but never used a seatbelt both answered "never." The non-contingency question was ambiguous.

Open-Ended Questions -
o "How old are you?" _________

Closed-Ended Questions -
o "How old are you?"
a. 18-24
b. 25-29
c. 30-39
d. 40-49
e. 50-59
f. 60 or older

Partially Open Question - "What is your race/ethnicity?"
a. African American/Black
b. Asian/Pacific Islander
c. Hispanic/Latino
d. Native American Indian/Alaskan Native
e. Other (please specify): ________________________

False Positive - when a respondent selects an attitude position when he or she lacks any knowledge of the topic.

False Negative - when a respondent refuses to answer a question or withholds information when he or she actually has information or holds an opinion.

Wording Effects - when the use of a specific word strongly influences how some respondents answer a survey question.

Recency Effect -
o In survey research, when respondents are more likely to remember the last answer response on a list and circle that answer rather than taking the question seriously. This occurs more often when surveys have many response categories.

Selective Refusals -
o When respondents refuse to answer certain questions (often those on sensitive topics). This can throw off results.
o Example: In 1992, more than one-third of Americans refused to answer a sensitive question about racial integration.
o If the respondents who opposed racial integration answered "don't know," the results appear more favorable to integration than they actually are.
o After adjusting for non-responses, researchers found that the percentage of Americans who favored racial integration dropped from 49% to 35%.
Order Effects - A problem in research design when the results of the study are attributed to the sequence of tasks in the experiment rather than to the independent variable.

Organization of questionnaire -
o Questions should be put in an order that minimizes discomfort.
o Opening questions should be easy to answer, interesting, and pleasant.
o Place questions on the same topic together and introduce them with a short statement (e.g., "Now I would like to ask you some questions about healthcare").

Context Effects - occur when the respondent is influenced by the interview setting/interviewer (for face-to-face surveys) or the location (home, office, classroom, etc.) where the survey is filled out.

Non-response rates have five components -
-Send letters in advance of the interview, offer to reschedule, and use small incentives.

Cooperation Rate - the proportion of people contacted who actually agree to participate in a survey. Cooperation among inner-city, low-income persons has increased when researchers use journalistic-style letters and personal phone calls rather than academic-style letters. (A sketch of the related calculations appears after these entries.)

Self-Administered Questionnaires: Advantages - Quick, easy, cheap.

Social Exchange Theory -
o Views the survey as a special type of interaction. A respondent behaves based on what he or she expects to receive in return for cooperation. To increase response rates and accuracy, researchers need to minimize the burdens of cooperating by making participation very easy.

Leverage Saliency Theory -
o Holds that salience, or interest/motivation, varies by respondent. Different people value, either positively or negatively, specific aspects of the survey process differently (length of time, topic of survey, sponsor). To maximize survey cooperation, researchers need to tailor the introduction and survey process to the respondent.

10 Ways to Increase Mail Questionnaire Response -
o Address the questionnaire to a specific person, not "occupant," and send it first-class mail.
o Include a carefully written, dated cover letter on letterhead stationery. In it, request respondent cooperation, guarantee confidentiality, explain the purpose of the survey, and give the researcher's name and phone number.
o Always include a postage-paid, addressed return envelope.
o The questionnaire should have a neat, attractive layout and reasonable page length.
o The questionnaire should be professionally printed, be easy to read, and have clear instructions.
o Send a follow-up reminder to those not responding.
o Do not send questionnaires during major holiday periods.
o Do not put questions on the back page. Instead, leave a blank space and ask the respondent for general comments.
o Sponsors that are local and are seen as legitimate (government agencies, universities, large firms) get better response rates.
o Include a small monetary inducement ($1) if possible.

Self-Administered Questionnaires: Disadvantages - Very difficult to get a representative sample.

Introduction and Entry - the interviewer gets in the door, shows authorization, and reassures and secures cooperation from the respondent.

Overall Costs -
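Relating to the Cooperation Rate entry above: survey organizations report response and cooperation rates in several standardized ways (e.g., the AAPOR definitions). The sketch below uses deliberately simplified versions, with all names and numbers hypothetical.

```python
def response_rate(completed, eligible_sampled):
    """Completed interviews as a share of all eligible sampled units."""
    return completed / eligible_sampled

def cooperation_rate(completed, contacted):
    """Completed interviews as a share of units actually contacted
    (refusals count against cooperation; never-contacted units do not)."""
    return completed / contacted

# Hypothetical survey: 1,000 eligible sampled, 800 contacted, 600 completed.
print(f"Response rate:    {response_rate(600, 1000):.0%}")   # -> 60%
print(f"Cooperation rate: {cooperation_rate(600, 800):.0%}") # -> 75%
```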
Surveys of Organizations -
o (businesses, schools, non-profit organizations)
o You should write a personal letter; explain the purpose of the survey, confidentiality, and the use of results; and describe any benefit to the organization for participating.
o Example: U.S. News & World Report Survey of Colleges and Universities

Time Budget Surveys -
o A specialized type of survey in which respondents record details about the timing and duration of their activities over a period of time.
o Example: Professors who work at a university were asked to be part of a time budget survey given by government officials who wanted to learn how much time professors devote to academic work activities. On average, professors worked 60 hours/week.

The Role of the Interviewer -
o The interviewer must obtain cooperation and build rapport, yet remain objective and neutral.
o The interviewer is encroaching on the respondents' time and privacy for information that may not directly benefit the respondents.
o The interviewer must try to reduce embarrassment, fear, and suspicion so that the respondents feel comfortable revealing information.
o Survey interviewers are non-judgmental and do not reveal their opinions, verbally or non-verbally. Example: If the respondent asks, "What do you think?" the interviewer may answer, "We are interested in what you think; what I think doesn't matter."

Probe - A follow-up question in survey research interviewing that asks a respondent to clarify or elaborate on an incomplete or inappropriate answer.

Main Part of the Interview -
o The interviewer uses the exact wording on the questionnaire and goes at a comfortable pace.

Unintentional errors or interviewer sloppiness - contacting the wrong respondent, misreading a question, omitting questions, reading questions in the wrong order, recording the wrong answer to a question, or misunderstanding the respondent.

Types of probes -
Collaborative Encounter Model - (critical & interpretive approach) Views human encounters as highly dynamic, complex, mutual interactions in which even minor forms of feedback (saying "hmm," smiling, nodding, body language, etc.) can have an influence. Proponents point to research showing that if an interviewer re-words a question, it can increase the reliability and validity of the answer.

Think-aloud interviews - a respondent explains his or her thinking out loud during the process of answering each question.

Cognitive Interviewing - A technique used in pilot testing surveys in which researchers try to learn about a questionnaire and improve it by interviewing respondents about their thought processes or having respondents "think out loud" as they answer the survey questions.

Methods for Improving Questionnaire with Pilot Tests -
o Interviewers and respondents are presented with short, invented "lifelike" situations and asked which questionnaire response category they would use.

The Ethical Survey -
Example: "Does your employer offer pension and health Insurance benefits?" Social Desirability Bias def & ex -
Quantitative Content Analysis -
o Researchers count the representation of images in documents and analyze the numbers using statistics.
o Example (print advertisements): 26% clothing, 10% jewelry, 41% cosmetics/skin care, 23% shoes.

Qualitative Content Analysis -
o Qualitative researchers often use interpretive or critical approaches to study documents and reports. Each image is a cultural object and carries symbolic social meaning.
o Example: In an examination of print ads in a magazine, what themes come across? Themes: beauty can bring you power and happiness; owning a certain type of car makes you more masculine, attracts women to you, and indicates you are virile.

Coding System -
o A set of instructions or rules used in content analysis to explain how a researcher systematically converted the symbolic content from text into quantitative data.

Structured Observation -
o A method of watching what is happening in a social setting that is highly organized and follows systematic rules for observation and documentation. In this case, of course, the researcher is observing the text that he or she is analyzing.

Inferences -
o The inferences a researcher can make based on the results are critical. In content analysis, inferences cannot reveal the intentions of those who created the text or the effects that messages have on those who receive them.
o Example: A content analysis shows that children's books contain gender stereotypes. That does not necessarily mean that the stereotypes in the books shape the beliefs or behaviors of children; you would need a separate study in order to make that inference.

Reliability of Latent Coding - Tends to be lower, since it depends on the coder's knowledge of language, image, social meaning, etc. Training and practice, however, can increase reliability.

Validity of Manifest Coding - Can be harder to achieve because computer programs cannot interpret the meaning of words that carry multiple senses (e.g., red ink, red hot, red fire truck, red herring, Red Scare).
o Example: A researcher counts the number of times the word "red" appears in written text. (See the sketch below.)

Latent Coding -
o (Qualitative Coding): A type of coding in which a researcher identifies subjective meaning, such as themes or motifs, and then systematically locates them in a communication medium.
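The manifest-coding example above (counting occurrences of "red") is easy to express in code, and doing so makes the validity problem visible: the count cannot distinguish the different meanings of the word. A minimal sketch, with hypothetical sample text:

```python
import re

def manifest_count(text, word):
    """Count surface occurrences of a word (case-insensitive, whole words)."""
    return len(re.findall(rf"\b{re.escape(word)}\b", text, flags=re.IGNORECASE))

sample = "The red ink spilled near the red fire truck; a red herring, some say."
# All three hits count the same, even though their meanings differ.
print(manifest_count(sample, "red"))  # -> 3
```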
Validity of Latent Coding - Is usually high, since people communicate meaning in many implicit ways that depend on context, not just specific words.
o Example: A researcher examines websites dedicated to covering foreign political issues and identifies themes, images, and symbolic meanings.

Intercoder Reliability -
o Equivalence reliability in content analysis, which identifies the degree of consistency among coders using a statistical coefficient (Krippendorff's Alpha). (See the sketch after these entries.)

Secondary Analysis of Survey Data - a special case of existing statistics. The researcher statistically analyzes survey data originally gathered by another researcher.

ICPSR (Inter-University Consortium for Political and Social Research) -
o The world's major archive of survey research data, with over 17,000 data sets.
o Data sets are made available to researchers at modest cost.

NORC (National Opinion Research Center) -
o Has collected data for the General Social Survey (GSS) almost every year since 1972.
o Researchers survey a representative sample of 1,500 U.S. residents.
o Face-to-face survey interviews are conducted in people's homes.
o The NORC staff carefully select and recruit a diverse group of interviewers.
o Interviewers are race-matched with respondents.
o Interviews contain 500 questions and last 90 minutes.
o The GSS datasets are publicly available for free.

Limitations of Existing Statistics and Secondary Data -
o The secondary data or existing statistics may be inappropriate for your research question.
o The researcher may not understand the topic well enough and may interpret the data incorrectly.
o Fallacy of misplaced concreteness
o The researcher cannot find the appropriate unit of analysis

The secondary data or existing statistics may be inappropriate for your research question. - You want to examine racial-ethnic tensions between Hispanics and Whites across the U.S., but you only have secondary data covering states in the western part of the U.S.

The researcher may not understand the topic well enough and interpret the data incorrectly. - The researcher uses data on high school graduation rates in Germany but does not know much about the German secondary education system and makes serious errors in interpreting the results.

Fallacy of misplaced concreteness - occurs when someone gives a false impression of precision by quoting statistics in more detail than warranted. Example: From GSS data you report that the percentage of divorced people is 15.65495 in order to appear more scientific. It is much better to report that approximately 15.7% of the population is currently divorced.
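Relating to the Intercoder Reliability entry above: Krippendorff's Alpha corrects for chance agreement and is the coefficient the notes name, but raw percent agreement is the simplest starting point for checking consistency between coders. A minimal sketch (hypothetical theme codes; not a substitute for the alpha coefficient):

```python
def percent_agreement(coder_a, coder_b):
    """Share of items on which two coders assigned the same code."""
    assert len(coder_a) == len(coder_b)
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

# Hypothetical theme codes assigned to ten magazine ads by two coders.
a = ["beauty", "power", "beauty", "car", "car",
     "beauty", "power", "car", "beauty", "power"]
b = ["beauty", "power", "car", "car", "car",
     "beauty", "beauty", "car", "beauty", "power"]
print(f"Percent agreement: {percent_agreement(a, b):.0%}")  # -> 80%
```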
The researcher cannot find the appropriate unit of analysis - A common problem with existing statistics is finding appropriate units of analysis; many statistics are published for aggregates, not individuals. Example: The researcher's question is: are unemployed people more likely to commit property crimes? The potential for committing the ecological fallacy (using larger units of analysis to draw conclusions about smaller units of analysis) arises here. Since you have aggregate data (data on the population) and no data on individuals, there is no way the researcher could measure whether the individual people who are unemployed are the same individual people committing property crimes.

Problems with Validity -
o Definitions don't match.
o The researcher relies on official statistics as a proxy (replacement) for a construct, and the official statistics have issues.
o The researcher lacks control over how the information was collected.

Definitions don't match -
o A researcher's definition of a construct does not match the definition used by the government agency or organization that collected the data. Example: a researcher's definition of unemployment might include people who would work if a good job were available, people who have to work part-time but want to work full-time, and those who have given up on looking for work. The official definition of unemployment includes only those who are actively seeking work (full-time or part-time).

The researcher relies on official statistics as a proxy (replacement) for a construct, and the official statistics have issues. - Example: You want to know how many people are victims of hate crimes, so you use police statistics on hate crime as a proxy. This measure is not entirely valid because many victims do not report hate crimes to the police. Official statistics do not always reveal all that occurred.

The researcher lacks control over how the information was collected - A university researcher re-examined the methods used by the U.S. Bureau of Labor Statistics and found an error. Data on permanent job losses came from a survey of 50,000 people, but the government agency failed to adjust for a high non-response rate. Corrected figures showed a decline in the number of people laid off between 1993 and 1996, whereas the original report showed no change.

Problems with Reliability -
o Official definitions or the method of collecting information changes over time.
o Equivalence reliability issues
o Representative reliability issues

Official definitions or the method of collecting information changes over time. -
In the 1980s the method for calculating the U.S. unemployment rate changed. Old method: (number of unemployed persons) / (number of persons in the civilian workforce). New method: (number of unemployed persons) / (number of persons in the civilian workforce and military).

Equivalence reliability issues -
o The researcher realizes the measure yields inconsistent results across different indicators. Example: A measure of crime across a nation depends on each police department providing accurate information. Studies of police departments suggest that political pressures to increase arrests (to show the department is tough on crime) or to decrease arrests (to lower crime rates prior to an election) affect what gets recorded.

Representative reliability issues -
o The researcher realizes that an indicator delivered inconsistent measures across a subpopulation. Example: The U.S. Bureau of Labor Statistics found a 0.6 increase in female unemployment after it started using gender-neutral measurement procedures. Until the mid-1990s, interviewers commonly asked women if they had been "keeping house." If they responded "yes," they were recorded as housewives and not as part of the unemployed.

Problems with Missing Data - Sometimes the data were collected but lost; more frequently, the data were never collected.
o Government agencies start and stop collecting information for political, budgetary, or other reasons. During the early 1980s, cost-cutting measures by the U.S. federal government stopped the collection of information that social researchers found valuable. The government has recorded work stoppages and strikes in the U.S. since the 1890s; however, five years of data (1912-1916) are missing because the government stopped collecting data during those years.

Ethical Concerns -
-Privacy and confidentiality
-Official statistics being used as social and political products (for both progressive and conservative causes)

Privacy and Confidentiality - Ethical concerns are not at the forefront of most nonreactive research because the people you study are not directly involved. It is very important for the researcher to maintain the privacy and confidentiality of participants when using data someone else has collected.

Official statistics being used as social and political products (for both progressive and conservative causes) - Example: Official statistics collected by the government can stimulate public attention, and political activism can also lead government to collect data. Drunk driving became a public issue only after government agencies began collecting statistics on the number of automobile accidents in which alcohol was a factor. Activism pushed for data to be collected on the number of patients who die in public mental hospitals and the number of non-White students enrolled in U.S. public schools. Example: Some activist groups have argued to end the collection of certain types of data.
Conservative activists have pushed for police departments not to collect data on the race/ethnicity of persons pulled over while driving. Some libertarian groups have pushed for the U.S. Census Bureau to stop collecting data.

Uecker et al. (2007), "Losing my religion: The social sources of religious decline in early adulthood" - The researchers used data from the National Longitudinal Study of Adolescent Health (a school-based, three-part panel survey on health and related social behaviors). They were interested in explaining declines in religious involvement that occur as adolescents move into adulthood.
o DV: religious involvement, importance of religion in one's life, feelings about organized religion.
o IV: college attendance, cohabitation, non-marital sex, drug and alcohol use.
o Findings: The researchers found that people who went to college remained as religious as those who did not go, which runs contrary to popular belief. Increased drug/alcohol use and pre-marital sex were correlated with reduced importance of religion in one's life.

Lieberson et al. (2000), "The instability of androgynous names: The symbolic maintenance of gender boundaries" - An androgynous first name is one that can be given to either a girl or a boy without clearly marking the child's gender. Some argue the feminist movement decreased gender marking in children's names. The researchers examined existing statistical data in the form of computerized records from the birth certificates of 11 million births of White children in Illinois from 1916 to 1989.
o Findings: They found that androgynous first names are rare (3%) and that there has been only a very slight historical trend toward androgyny. Parents are more likely to give androgynous names to girls than to boys. Androgynous names are unstable.

Behm-Morawitz and Mastro (2008), "Mean girls? The influence of gender portrayals in teen movies on emerging adults' gender-based attitudes and beliefs" - Mean Girls was a 2004 movie about teen girls who obtain rewards and feel pleasure by being socially aggressive. The researchers searched the internet to identify 90 U.S. teen films released between 1995 and 2005 and picked the 20 with the highest box office sales. They trained staff to code the following: socially cooperative behavior, socially aggressive behavior, and positive and negative consequences of behavior. In a statistical analysis, the researchers found that both males and females were more often rewarded than punished for social aggression, with females significantly more likely to be rewarded. The researchers later interviewed college undergraduates and found that those who watched the most teen movies and most identified with the characters were more likely to believe that social aggression is rewarded with increased popularity among peers.