# Exam 2 Study Guide - Methods of Political Science | POLS 200, Exams of Political Science

Material Type: Exam; Class: Methods of Political Science; Subject: Political Science; University: University of Illinois - Chicago; Term: Unknown 1989;


Pols 200: Methods of Political Science — Exam II Review

This exam will cover chapters 3 and 6–12 in Babbie, the supplemental readings we discussed in class (the Segal and Cover, CT Crackdown, and Schneider et al. articles), and lecture material (including the Milgram experiment video).

## Chapter 6: Indexes, Scales, and Typologies

Key terms:

- Content validity
- Unidimensionality
- Item analysis
- Bivariate relationship
- Index validation
- External validation
- Likert scale
- Correlation

Review questions:

- What are some reasons composite measures are used in quantitative social science research?
- What is the difference between an index and a scale? What are some similarities?
- What are the steps for constructing an index? How can you tell if the index is valid?
- What are the levels of measurement for variables? (This is from pre-Chapter 6 material.)

## Chapter 7: Sampling

Key terms:

- Probability and nonprobability samples
- Types of nonprobability samples (reliance on available subjects, purposive/judgmental, snowball, and quota)
- Informants
- Parameter
- Statistic
- Population/study population
- Sampling frame
- Sampling unit/observation unit
- EPSEM: equal probability of selection method
- Sampling error

Review questions:

- What was the Literary Digest fiasco and why was it important?
- What are the two main advantages of the EPSEM (equal probability of selection method) of sampling?
- What are the two main sources of sampling error?

## Modes of Observation/Research Design

- What is a research design?
- How does research design fit together with the process of hypothesis testing?
## Chapter 8: Experiments

Key terms:

- Control and experimental group
- "Treatment"
- Pre-test/post-test
- Internal validity of a research design
- External validity of a research design
- Hawthorne effect
- Placebo
- Double-blind experiment
- Threats to internal validity: history, maturation, testing/learning, instrumentation, instability, regression to the mean, experimental mortality/attrition, selection
- Threats to external validity: reactive arrangements, representativeness of the sample

Review questions:

- Why is the classical experiment the "ideal" research design? What are the basic characteristics of the classical experiment?
- How does the classical experiment measure up to the criteria for causality?
- Explain the specific ways in which each of the common threats to the validity of a research design influences the researcher's ability to make valid descriptive or explanatory inferences (see the CT Crackdown article too).
- What are the strengths and weaknesses of the classical experiment? What are some ways in which the classic experimental design can be altered?
- What are some different ways that subjects can be assigned to the control/experimental group? Under what conditions would you use these different assignment strategies?
- What was the basic research question investigated in the Milgram experiment? How was the experiment designed, and in what ways did the researchers alter the experimental treatment? How did they attempt to control for potential threats to internal validity? What were the main findings of this research?

## Chapter 3: The Ethics and Politics of Social Research

Key terms:

- Voluntary participation
- No harm to participants/norm of informed consent
- Anonymity
- Confidentiality
- Debriefing

Review questions:

- What was the ethical dilemma of the Tearoom Trade? What was the ethical dilemma of the Milgram experiment? How did the researchers address some of these issues?
- Why are evaluation studies more at risk of political pressure? Explain.
- What are some ethical concerns of evaluation studies/quasi-experiments?
## Chapter 12: Evaluation Research

Key terms:

- Quasi-experimental designs
- Social intervention
- Matched-pair/non-equivalent control group design
- Interrupted time series
- Controlled/multiple time series designs
- Qualitative design/case study

Review questions:

- Why does evaluation research lend itself to quasi-experimental designs?
- In what specific ways does a quasi-experiment differ from the classical experimental design? What are some variations of the classical experiment?
- What are the strengths and weaknesses of these quasi-experimental designs?
- What are some problems (including ethical concerns) with evaluation studies?
- How do different quasi-experimental designs measure up to the criteria for causality?
- What are the measurement issues that arise in evaluation research?