Quantitative Research Methods Semester 1 Exam Study Guide (2024/2025)
Communication is a field in? Social Science.
Communication can be defined as: "The process by which one person stimulates meaning in the mind/s of another person/people through verbal and nonverbal messages."
SMCR: Source --> Message --> Channel --> Receiver.
The Source in the SMCR is? The person who creates the message.
The Message in the SMCR is either? Verbal or nonverbal.
The Channel in the SMCR is? The way in which a message is delivered.
The Receiver in the SMCR is? The person who gets the message and assigns meaning to it.
The Scientific Approach: A method which holds a body of techniques for investigating phenomena, acquiring new knowledge, or correcting/integrating previous knowledge.
Research: The scientific way to acquire knowledge.
Three biased and nonscientific ways of acquiring knowledge are:
1. Personal Experience
2. Intuition
3. Authority
Personal Experience: Experiencing something first hand.
Intuition: Instinct; going with the gut.
Authority: Receiving information based on someone else's bias.
Objectivism: Without bias.
Empiricism: Based on observation and experience.
What are the four steps of research? Theories --> Predictions (hypotheses) --> Observations --> Empirical Generalizations.
What are the goals of Basic Research? To produce theory - theoretical principles that simplify and explain apparently complex, related communication processes.
What are the goals of Applied Research? To provide knowledge that can be immediately useful to a policymaker who seeks to eliminate or alleviate a communication problem.
What is used as a guiding theory in Basic Research? Other scholars' theoretical perspectives.
What is used as a guiding theory in Applied Research? Any idea that holds promise of changing an unsatisfying situation into a more desirable one.
Overall, the three things Basic Research could aim to do are:
1. Test theories
2. Resolve a contradiction between two theories
3. Elaborate on a theory
Overall, Applied Research is designed to: Solve a practical, 'real world', society-relevant problem.
What are the two basic research designs? 1. Experimental 2. Correlational.
Operationalization: The exact operations we use to measure our variables.
What can be inferred in a correlational design? An association between two variables.
What can be inferred in an experimental design? A causal relationship between two variables.
In a correlational design the IV and DV are: both measured.
In an experimental design the IV and DV are: the IV is manipulated while the DV is measured.
Correlation: As the quantity of one variable changes, the quantity of the other variable changes.
Positive Correlation: Both variables go up or down together (they change together).
Negative Correlation: One variable goes up while the other goes down.
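To make positive and negative correlation concrete, here is a minimal Python sketch that computes Pearson's r for two variable pairs; the variable names and values (study hours, exam scores, absences) are invented purely for illustration.

```python
# Illustration of positive vs. negative correlation using Pearson's r.
# The data below are invented for the example.

def pearson_r(x, y):
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    var_x = sum((a - mean_x) ** 2 for a in x)
    var_y = sum((b - mean_y) ** 2 for b in y)
    return cov / (var_x ** 0.5 * var_y ** 0.5)

study_hours = [1, 2, 3, 4, 5, 6]
exam_score  = [55, 60, 64, 70, 76, 80]   # rises with study hours
absences    = [9, 8, 6, 5, 3, 1]         # falls with study hours

print(pearson_r(study_hours, exam_score))  # close to +1 -> positive correlation
print(pearson_r(study_hours, absences))    # close to -1 -> negative correlation
```

An r near +1 means the variables rise and fall together; an r near -1 means one rises as the other falls.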
Difference Hypothesis: The researcher determines what the levels of one variable are, and then measures the other variable.
What are the steps of reporting research?
1. Title
2. Abstract
3. Introduction
4. Method
5. Results
6. Discussion
7. References
A "Title" in a research report: Involves the key theoretical variables, their relationship, and oftentimes a "teaser" (which is becoming more and more popular).
An "Abstract" in a research report: A brief summary of the study which appears at the beginning of the article. It informs about the topic that was investigated, how it was investigated, and the major findings of the study, as well as the theoretical and/or practical implications of the study's findings.
What is the structure of the "Introduction" in a research report? This section includes:
1. A broad research problem
2. A review of related theories and previous research
3. The proposed theory (general)
4. The proposed hypothesis (more specific)
The "Method" in a research report: Provides a thorough description of the research methods used in the study.
What are the three subsections the "Method" consists of in a research report? 1. Participants 2. Materials 3. Procedure.
The "Results" in a research report: Include a summary of the raw data and the statistical analyses that were done in the study. They do not include any conclusions based on the data, but they do include graphs and/or tables.
The "Discussion" in a research report:
-Begins with a summary of the results of the study and an evaluation of whether or not the empirical findings support the original hypotheses.
-Presents the theoretical and practical implications of the study's findings.
-Compares the findings with past research findings; similarities and differences are described.
-Includes information about the limitations of the study, as well as suggestions for further studies to overcome these limitations.
The "References" in a research report: A list of all the sources cited in the report.
Error Variance: Variability in the DV caused by factors other than the IV (e.g., chance and individual differences).
Why is Random Assignment used as a method? To prevent or rule out other explanations.
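Random assignment itself is mechanically simple. The sketch below, assuming a hypothetical list of 20 participant IDs and a two-condition design, shuffles participants into experimental and control groups so that initial differences are spread by chance.

```python
import random

# Hypothetical participant IDs; in practice this would be the recruited sample.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)              # randomize order to spread initial differences
half = len(participants) // 2
experimental_group = participants[:half]  # receives the stimulus (the IV manipulation)
control_group = participants[half:]       # does not receive the stimulus

print("Experimental:", experimental_group)
print("Control:", control_group)
```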
Manipulation of the IV: The researcher purposely alters or changes the IV to see whether the alteration has an effect on the DV. He/she creates at least 2 conditions (an experimental group and a control group) - at least 2 levels of the IV.
How does the researcher manipulate the IV when he/she wants to see the manipulation's effect on the DV? The researcher provides the experimental group with a specific stimulus or phenomenon, and does not give the same stimulus or phenomenon to the control group.
What does the researcher rely on when manipulating the IV? (Three options)
-Written materials or audio/video recordings presented to the participants
-A confederate (someone who is playing along in the experiment)
-Hypothetical scenarios and role-playing activities
Measurement of the DV (done in three ways):
-Self-report scales
-Observations
-Physical measures
Controlling the experiment: Making sure that the researcher is examining what he/she intended to examine.
What are three things that you need to be aware of when controlling the experiment? 1. Threshold Effects 2. Experimenter Effects 3. External Effects.
Threshold Effects: Changes in a specific DV are only seen after the IV has reached a certain level.
Experimenter Effects: Effects caused unknowingly by the experimenter on the participants.
External Effects: Variables that are not being measured in the study but are affecting the results.
Mediating Variable: One variable relates to another, which in turn relates to the next (an intervening variable). It answers WHY the IV causes the effect in the DV. Example: Class Attendance --> Material Understanding --> Grade on the test. In this case, "Material Understanding" is the mediating variable.
Moderating Variable: A variable that affects the strength of the relationship between the IV and the DV. The moderator sets the boundaries of the effect (when is it likely to be stronger? weaker? to disappear?).
The stages of actually conducting an experiment (five):
1. Introducing the experiment to participants and obtaining participants' consent
2. Random assignment
3. Manipulation of the IV + manipulation check
4. Measuring the DV
5. Debriefing (reviewing the process done so far to make sure it was correct)
The stages of actually conducting a non-experimental study (three):
1. Introducing the study to participants and obtaining participants' consent
2. Measuring the DV
3. Debriefing (reviewing the process done so far to make sure it was correct)
Confounding Variable: An extraneous variable that correlates with both the IV and the DV (a variable that affects the IV and DV simultaneously). Example: working hours affect both class attendance and the grade on the test.
Population: The entire set of objects, observations, or scores that have some characteristic in common.
Steps of a Sampling Process (three):
1. Identify the theoretical population
2. Identify the sampling frame (the list of all potential participants in the study that are accessible to the researcher)
3. Identify the sampling method that will be used
Sample: The people or units that a researcher actually includes in the study.
Why is sampling used? To represent the population (because the entire population cannot be used in the study).
What are the two types of sampling methods? 1. Probability 2. Non-Probability.
Purposive Sampling (Non-Probability Method): Selecting participants to fulfill or meet a specific purpose the researcher has in mind.
Network Sampling (Non-Probability Method): Asking participants to refer researchers to other people who could serve as participants ("making connections").
Sampling Error: The chance of making an error when sampling; it is greater when using a non-probability method than when using a probability method.
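A minimal probability-sampling sketch, assuming an invented sampling frame of 500 students and a sample size of 50, shows how the three sampling steps above translate into a simple random sample.

```python
import random

# Hypothetical sampling frame: the accessible members of the theoretical population.
sampling_frame = [f"student_{i}" for i in range(1, 501)]

# Probability method: every unit has a known, equal chance of being selected.
sample = random.sample(sampling_frame, k=50)

print(len(sample), "units drawn from a frame of", len(sampling_frame))
```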
Measurement: The process of systematic observation and assignment of numbers to phenomena according to rules. We define the phenomenon (what we attempt to measure), e.g. length, and we define how we are going to assign numbers (the set of rules), e.g. centimeters.
Levels of Measurement: Going up the hierarchy, each level includes the characteristics of the former scale and adds more:
-Nominal (lowest)
-Ordinal
-Interval
-Ratio (highest)
Nominal Scale: Categories of this scale may be named by words (male/female, yes/no) or numbers (phone/car number). Variables must be classified into at least 2 categories, and these categories must be:
-Mutually exclusive (they do not overlap)
-Equivalent (they have the same kind of values)
-Exhaustive (they include all options, so there is a way to categorize every participant)
Ordinal Scale: Classifies the variable into nominal categories but also rank-orders them. It allows comparison (more/less, 1st/2nd/3rd place) and provides more information because it transforms discrete classifications into ordered classifications.
What is the limitation concerning ordinal scales? They rank order but do not tell the researcher how much more or less of a difference there is between values.
Interval Scale: Categorizes the variable and rank-orders it, but also establishes equal distances between each point along the scale (the zero point on the scale is arbitrary [unspecified]).
In an example, how are the nominal, ordinal, and interval scales different? When looking at temperature: "Nice day" (nominal); "Today is nicer than yesterday" (ordinal); "It is 25 degrees Celsius outside" (interval).
Ratio Scale: Not only categorizes and rank-orders a variable along a scale with equal intervals between adjacent points, but also establishes an absolute (or true) zero point where the variable being measured ceases to exist. Examples: age, weight, height, number of hours watching a specific show, etc.
Three Measurement Methods: 1. Self-reports (and others' reports) 2. Behavioral acts 3. Physiological measures.
Self Reports: Any time a person reports on his or her attitudes, feelings, and judgments.
What are the pros and cons of self reports? Pros: simple and direct; can be close-ended or open-ended. Cons: responses may be dishonest; sometimes you can't or don't want to report how you feel.
Close-ended: Respondents answer a question by choosing a number on a scale.
Open-ended: A question is asked and respondents answer in their own words. The content is judged by coders and requires a coding scheme.
Behavioral Measures: The recording of behavior that is directly observed. These measures can be either reactive or not (depending on whether people know they are being recorded).
What are the pros and cons of observing behavioral measures? Pros: measurements are more accurate; participants don't know they are being recorded. Cons: measures might not be sensitive enough; ethical issues; can be more expensive than self-reports.
Reactive: Knowing that you are being recorded.
Context of Measurement: External and internal factors that affect the measure, such as noise, the temperature in the room, and the fatigue and mood of the participant.
Content of Measurement: Characteristics of the specific set of questions by which we chose to measure the variable that affect the measure but are not relevant to the variable.
Cronbach's Alpha: A mathematical formula that tests the correlation between each item on a scale and the other items (internal consistency).
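Cronbach's alpha is commonly computed as alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), where k is the number of items. The sketch below works through the calculation on an invented data set of five respondents answering four scale items.

```python
# Minimal Cronbach's alpha sketch; the 5 respondents x 4 items below are invented.
items = [
    [4, 5, 3, 4],   # respondent 1's answers to items 1-4
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 4, 3, 3],
    [1, 2, 2, 1],
]

def variance(values):
    m = sum(values) / len(values)
    return sum((v - m) ** 2 for v in values) / (len(values) - 1)  # sample variance

k = len(items[0])                                   # number of items on the scale
item_vars = [variance([row[j] for row in items]) for j in range(k)]
total_var = variance([sum(row) for row in items])   # variance of each respondent's total score

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 3))   # closer to 1 = the items are more internally consistent
```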
Validity of the Measurement: How well and how accurately researchers measure what they intended to measure. The more closely the measured data reflect the theoretical concept, the more valid the measurement is.
What four things should a measurement be? 1. Reliable 2. Consistent 3. Stable 4. Valid.
Reliability and Validity in relation to each other: All valid measurements are reliable, but not all reliable measurements are valid.
Content Validity: Making sure the measurement includes relevant content and no irrelevant content.
Face Validity: A measure of how representative the research is at face value (does it seem to measure what it intended to?).
Panel Approach: Asking qualified people to describe aspects of the variable or to agree that the instrument taps the concept being measured.
Criterion-Related Validity: Established when a measurement technique is shown to relate to another instrument or behavior (called the criterion) that is already known to be valid.
Three ways to assess a measurement's validity: 1. Content validity 2. Criterion-related validity 3. Construct validity.
Two types of Criterion-Related Validity: 1. Convergent/Concurrent 2. Predictive.
Concurrent/Convergent Validity: When results from the new measurement agree (concur/converge) with those from an existing, known-to-be-valid criterion.
Concurrent Validity: Scores on the measure are related to a criterion measured at the same time; 2 or more groups of people differ on the measure in expected ways.
Convergent Validity: Scores on the measure are related to other measures of the same concept.
Predictive Validity: How well a measurement instrument forecasts or predicts an outcome variable.
Discriminant Validity: Scores on the measure are not related to other measures that are theoretically different.
Construct Validity: Various theoretically guided methods that show the extent to which we are actually measuring the theoretical concept we intend to measure.
External Validity: The ability to generalize the findings from a study. The question is whether the conclusions from a particular study can be applied to other people/objects/etc. If a study is externally valid, the conclusions drawn from it are not limited to the particular people/objects studied.
Internal Validity: The accuracy of the conclusions drawn from a particular study. The study is designed and conducted (the methodology) such that it leads to accurate findings about the phenomena being investigated, for the particular group of people/objects studied.
Two general types of a conclusion's validity: Internal and External.
Internal Validity is compromised by three general threats: 1. How the research is conducted 2. Effects due to participants 3. Effects due to researchers.
Measurement validity and reliability: In order to have confidence in the conclusions drawn from the study, the measurements must be reliable and valid.
Procedure validity and reliability: In order to have confidence in the conclusions, the study's procedures must also be applied consistently and as intended.
Statistical regression (regression toward the mean) - a threat to Internal Validity (effects due to participants): The tendency for individuals or groups that were selected on the basis of initial extreme scores to get less extreme scores on the second and subsequent measures. Example: very liberal and very conservative participants are asked questions and give very different answers; they then watch the same video, are asked the same questions again, and this time the answers they give are less extreme.
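Regression toward the mean is easy to demonstrate with a small simulation: participants selected for extreme scores on a noisy first measurement tend to score less extremely on a second measurement. The population parameters, sample size, and cutoff below are invented for illustration.

```python
import random

random.seed(1)

# Each person has a stable "true" attitude plus random measurement noise at each wave.
true_scores = [random.gauss(50, 10) for _ in range(1000)]
time1 = [t + random.gauss(0, 10) for t in true_scores]
time2 = [t + random.gauss(0, 10) for t in true_scores]

# Select the participants who scored most extremely at time 1 ...
extreme = [i for i, score in enumerate(time1) if score > 70]

# ... and compare their average at time 1 with their average at time 2.
avg_t1 = sum(time1[i] for i in extreme) / len(extreme)
avg_t2 = sum(time2[i] for i in extreme) / len(extreme)
print(round(avg_t1, 1), round(avg_t2, 1))   # the time-2 mean drifts back toward the overall mean of ~50
```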
Researcher Personal Attribute Effect - a threat to Internal Validity (effects due to researchers): Particular characteristics of the researcher (race, gender, age, etc.) influence participants' behavior. Solution: employ a variety of research assistants (RAs), or determine beforehand which characteristics might affect participants' responses and try to avoid them.
Researcher Unintentional Expectancy Effect (Pygmalion Effect) - a threat to Internal Validity (effects due to researchers): The researcher influences participants' responses by unintentionally letting them know the behavior he or she desires. Referred to as "demand characteristics" - the researcher "demanding" specific behavior. Solution: use "blind" RAs and follow an exact standard procedure for all participants.
Researcher Observational Biases (three types) - a threat to Internal Validity (effects due to researchers): The people who observe the participants (the researcher, research assistants) demonstrate inaccuracies during the observational process: 1. Observer drift 2. Observer bias 3. Halo effect.
Observer Drift: The observer becomes inconsistent in the criteria used to make and record observations (typical of lengthy observation). Solution: fresh observers.
Observer Bias: The observer's knowledge of the research purpose and/or hypotheses influences their observations. Solution: blind observers.
Halo Effect: Occurs when an observer makes multiple judgments of the same person and hence overrates or underrates him or her. Solution: employ different observers.
What are the three factors which relate to external validity?
1. How participants were selected (sampling).
2. Whether the procedures used mirror real life (ecological validity).
3. The need to replicate research findings (replication).
In an experiment there are at least 2 groups - an experimental group and a control group. Therefore we can do what two things? 1. Randomly assign participants 2. Ensure an equal experimental environment.
Random assignment of participants: Eliminates initial differences between groups. Deals with the threats of selection and statistical regression.
Ensuring an equal experimental environment: Different aspects of the environment are held constant (such as the time of day, location, etc.). Deals with the threats of sensitization, maturation, and history.
In the controlled experiment, what distinguishes one end of the continuum from the other is the way in which the researcher: a. Exposes participants to the IV. b. Rules out initial differences between conditions. c. Controls for the effects of external influences.
Control: Control is not simply present or absent; there is a continuum, ranging from loosely controlled experiments to very tightly controlled experiments.
Disadvantages of experiments: 1. Ecological validity. 2. Not always possible (time, money, ethical issues, variables that cannot be manipulated). Therefore we commonly use other methods to examine our research questions: -Surveys (a typical correlational design) -Nonreactive measures (e.g. natural observation, content analysis).
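As a small illustration of a nonreactive measure, the sketch below performs a crude content analysis: counting how often words from two invented coding categories appear in a made-up transcript. A real content analysis would rely on a coding scheme applied by trained coders.

```python
from collections import Counter
import re

# Invented transcript and invented coding categories, for illustration only.
transcript = """The candidate spoke about jobs, jobs growth, taxes and healthcare.
The opponent focused on healthcare, education and taxes."""

categories = {"economy": {"jobs", "taxes", "growth"},
              "social": {"healthcare", "education"}}

words = re.findall(r"[a-z]+", transcript.lower())
counts = Counter()
for word in words:
    for category, terms in categories.items():
        if word in terms:
            counts[category] += 1

print(dict(counts))   # e.g. {'economy': 5, 'social': 3}
```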