Research design

Empirical research is often conducted to establish evidence of a phenomenon or relationship, to examine and evaluate causal relationships, or to provide an explanation of how, why, and under what conditions a causal relationship exists. The purpose of the research often dictates the research design (the strategy used to investigate your research questions) we adopt to validate, extend, or refute our understanding of a social phenomenon. Research design, in turn, dictates the logic that affects our ability to draw sound conclusions. It affects internal validity through its ability to rule out confounding variables, statistical conclusion validity through decisions about who is included in the study and how they are measured, construct validity through how the variables are operationalized, and external validity through how far the findings generalize beyond the study sample and setting.

Research design can be broadly divided into experimental designs and non-experimental designs, the latter also known as correlational designs. Let’s unpack these designs and see what features they offer.

Experimental design

Shadish, Cook, and Campbell (2002) defined an experiment as “a systematic study designed to examine the consequences of deliberately varying a potential causal agent” (p. xvii). That is, an experimental research design is one in which the independent variable (IV) is fixed by manipulation or by a natural occurrence. Manipulation of the IV simply means that we can decide the nature of the treatment, to whom it will be applied, and to what extent. Simply put, it is the process through which we purposefully change, alter, or influence our IV. Once the treatment is administered, we can measure change in the dependent variable (DV) to see if the treatment made any difference. The impact is then analyzed through statistical analysis. All that is left is to follow the same rules of logical deduction to establish causality: covariation, temporal precedence, and ruling out rival explanations.

By manipulating the IV and looking for a change in our DV we can establish covariation and temporal precedence. This enables us to go beyond description, prediction, and the identification of relationships, and allows us to establish, at least partially, the causal effect. This is a unique feature of experimental research. However, we do need to control rival explanations (confounding variables) to make causal inferences. The greater the ability to control rival explanations, the greater the internal validity of the study. Research designs provide varying support for controlling rival explanations depending on the use, or lack, of design elements in a research study. That is, research designs range along a continuum of internal validity. So what are these design elements?

Design elements are features that can be manipulated or controlled by the researcher to help address threats to validity. These design elements include assignment to differently treated groups (random or natural), measurement (pretest, posttest, non-equivalent dependent variable, multiple pretests and posttests, moderating variable), comparison groups (equivalent, non-equivalent, cohort), and treatments (repeated treatment, removed treatment, etc.). We often choose a combination of these design elements to suit particular circumstances and the purpose of our research. At the same time, it is the presence, absence, or combination of these design elements that allows us to analyze the strengths or weaknesses of a research design for providing causal inferences. For simplicity, experimental designs can be categorized into pre-experiments, quasi-experiments, and true experiments according to the strength of causal inference their design elements permit.

Design elements in experimental research

True experiment

A true experiment is considered to be the hallmark of causal research. It is characterized by the manipulation of one or more IVs, control over other variables, the presence of a control group, and measurement of one or more DVs of interest. That is, it should have at least two groups (one treatment and one control/comparison) and use random assignment. Let’s look at an example.

Suppose we are interested in observing the effect of room temperature on the typing speed of participants. Our IV is therefore room temperature while our DV is typing speed. Since this is experimental research we can manipulate our IV, room temperature, and observe the effect on typing speed. Let’s say we have access to a law firm. We can select five offices and randomly select twenty typists from each office. We can set up our experiment as below. Let’s say that, normally, the office temperature is set at 70 °F. Then Office 3 can serve as our control group. Suppose we get the following results.

	Room temperature (°F)	Mean typing speed
Office 1	50	53 wpm
Office 2	60	70 wpm
Office 3	70	100 wpm
Office 4	80	87 wpm
Office 5	90	60 wpm
Effect of room temperature on mean typing speed

From what we observe it would appear that a room temperature of 70 °F might be ideal for achieving the highest typing speed. But what if we realize that Office 3 is full of senior typists? That would explain the results, wouldn’t it? Seniority, here, is the confounding variable. In other words, it is our rival explanation for the causal link between the room temperature of 70 °F and the highest typing speed. We know that a true experiment needs two conditions to be met: it should have a control group, and the groups should be created through random assignment. We sampled typists from each office randomly, but we did not assign the typists to the different room temperatures/offices. We can use random assignment to assign all 100 typists to the different groups randomly.

Random assignment, in theory, should eliminate all systematic pre-existing differences, such as seniority, between the groups. As long as the random assignment is carried out faithfully, we can be sure that any remaining difference between the groups is purely due to chance. We account for such chance variation in our statistical analysis by setting an alpha (level of significance), typically at 0.05. We can also use ANCOVA to statistically control confounding and control variables in a true experiment. Office 3 can still be our control group and the other four can be our treatment groups. Now we can measure the mean typing speed for all offices and perform statistical analysis to rule out chance variation.
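To make this concrete, here is a minimal sketch of how random assignment could be carried out in code. Everything in it is illustrative: the typist IDs are fabricated, and the helper name assign_randomly is ours, not from any particular library.

```python
import numpy as np

def assign_randomly(participant_ids, conditions, seed=42):
    """Randomly assign participants to equally sized groups, one per condition."""
    rng = np.random.default_rng(seed)
    shuffled = rng.permutation(participant_ids)          # put participants in random order
    groups = np.array_split(shuffled, len(conditions))   # split into equal-sized groups
    return {cond: list(group) for cond, group in zip(conditions, groups)}

# Hypothetical example: 100 typists, five room-temperature conditions (°F)
typists = [f"typist_{i}" for i in range(1, 101)]
temperatures = [50, 60, 70, 80, 90]
assignment = assign_randomly(typists, temperatures)
for temp, members in assignment.items():
    print(temp, len(members))   # 20 typists per condition
```

Because each typist has the same chance of landing in any condition, seniority (and every other pre-existing difference) should, on average, be spread evenly across the five offices.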

Some principles of a good experimental design to increase statistical power include (a minimal sample-size sketch follows the list):

  1. Using large samples
  2. Developing and using reliable measures
  3. Selecting a homogeneous population
  4. Using repeated measures on the same subjects
  5. Using standardized processes and procedures for research (reducing bias)
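Relating to the first principle, here is a minimal sketch of an a priori sample-size calculation using the power module in statsmodels. The effect size, alpha, and power values are illustrative assumptions, not recommendations.

```python
from statsmodels.stats.power import TTestIndPower

# Assumed inputs: a medium standardized effect size (Cohen's d = 0.5),
# alpha = 0.05, and a desired power of 0.80 for a two-group comparison.
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, alpha=0.05, power=0.80,
                                   ratio=1.0, alternative='two-sided')
print(f"Required sample size per group: {n_per_group:.0f}")  # roughly 64 per group
```

Smaller expected effects or stricter alpha levels drive the required sample size up, which is why large samples and reliable measures go hand in hand with statistical power.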

Diagramming research

Depending on our goal, we can implement variations of the true experimental design. Before we go into those designs, let’s look at how to diagram research designs. Diagramming is a graphical way to represent a research design. It makes it easy to identify designs and their strengths and weaknesses based on the design elements used. The rules of diagramming are very simple. “O” represents an observation of our dependent variable, X or XT stands for a treatment or experimental intervention, the absence of X or the label XC represents the control group, and R stands for random assignment. Each group is represented by a separate line of observations.

The true experimental design used in the above example is the posttest-only control group design. This design requires less time and fewer resources than designs with pretests. It can be diagrammed as below:

R	X50				O
R	X60				O
R	X70-Control		O
R	X80				O
R	X90				O
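With a posttest-only design like this, the analysis reduces to comparing posttest scores across the randomly assigned groups. The sketch below runs a one-way ANOVA on made-up typing-speed scores; the numbers are purely illustrative and are not the data from the example above.

```python
from scipy import stats

# Hypothetical posttest typing speeds (wpm) for the five temperature groups
group_50 = [51, 55, 50, 56, 53]
group_60 = [68, 72, 69, 71, 70]
group_70 = [98, 102, 99, 101, 100]
group_80 = [85, 89, 86, 88, 87]
group_90 = [58, 62, 59, 61, 60]

# One-way ANOVA: is at least one group mean different from the others?
f_stat, p_value = stats.f_oneway(group_50, group_60, group_70, group_80, group_90)
print(f"F = {f_stat:.2f}, p = {p_value:.4f}")
```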

Pretest-posttest design

The second common design is the pretest-posttest design. Here participants are randomly assigned to either the treatment group or the control group. There is a pretest of the DV for both groups, then the treatment is administered, and a posttest measure is taken. Although random assignment is usually successful, it can sometimes fail, leaving preexisting differences between the two groups. A pretest measure allows us to check the pretest equivalency of the groups. This is important when the group size is small (less than 30). If we find nonequivalent groups, we can use statistical methods (analysis of covariance, ANCOVA) to correct this error. Another option is to match pairs of individuals on certain variables; the choice of matching variables is generally dictated by previous research, theory, and the experience of the researcher. The members of each pair are then randomly assigned to either the treatment or the control group. Pretesting is also advantageous when there is a problem of attrition. However, including a pretest has its disadvantages. Pretests can have a priming effect (pretest sensitization) in which participants become familiar with the desired outcome, which can influence their posttest scores.

R    O    XT    O
R    O    XC    O
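When a pretest is available, one common analysis (the ANCOVA mentioned above) regresses the posttest on group membership while adjusting for the pretest. A minimal sketch using statsmodels, with fabricated scores and variable names of our own choosing, might look like this:

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical pretest/posttest scores for treatment (T) and control (C) participants
data = pd.DataFrame({
    "group":    ["T", "T", "T", "T", "C", "C", "C", "C"],
    "pretest":  [52, 48, 50, 55, 51, 49, 53, 50],
    "posttest": [64, 60, 63, 68, 54, 50, 56, 52],
})

# ANCOVA: posttest regressed on group, controlling for pretest scores
model = smf.ols("posttest ~ C(group) + pretest", data=data).fit()
print(model.summary())   # the C(group) coefficient estimates the adjusted treatment effect
```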

Solomon four group design

But sometimes it is difficult to remove the priming effect. For example, if you were in a study related to nutrition, a pretest might ask what you had eaten in the last couple of days. The pretest might remind you to eat healthier before you even start the study! This is where the Solomon four-group design is useful. Here is how it is set up.

R	O	XT 	O
R	O	XC 	O
R		XT 	O
R		XC	O

Two groups get the pretest and two groups don’t. Group 1 allows us to see if the experimental manipulation was effective. Group 2 allows us to see if the difference between the pretest and posttest measures is separate from the effect of the treatment itself. Groups 1 and 3 help us see whether the pretest had a priming effect (also known as a pretest-by-treatment interaction). Groups 2 and 4 help us identify any change caused by the pretest or by time (maturation and history). Ideally, we would like to see no difference between groups 2 and 4. This design provides the best possible control of threats to internal validity. It can also aid external validity if the effect of the pretest turns out to be small. However, it is easy to see that this design can be resource- and time-consuming. It also requires a large sample. Because of these disadvantages, researchers often prefer to avoid studying pretest sensitization.
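One common way to analyze posttest scores from a Solomon four-group design is a 2×2 factorial ANOVA with pretesting (yes/no) and treatment (yes/no) as factors; a significant interaction would signal pretest sensitization. The sketch below uses statsmodels with fabricated scores for illustration only.

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical posttest scores for the four groups (pretested x treated)
data = pd.DataFrame({
    "pretested": ["yes"] * 6 + ["no"] * 6,
    "treated":   (["yes"] * 3 + ["no"] * 3) * 2,
    "posttest":  [70, 72, 74, 60, 62, 61, 68, 71, 69, 59, 61, 60],
})

# 2x2 ANOVA: main effects of pretesting and treatment, plus their interaction
model = smf.ols("posttest ~ C(pretested) * C(treated)", data=data).fit()
print(anova_lm(model, typ=2))   # the interaction row tests pretest-by-treatment sensitization
```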

Quasi-experiment

It is not always possible to use random assignment, and sometimes it is not ethical. For example, if we are interested in studying the effect of smoking marijuana on brain health, using random assignment is not ethical. And it is not possible to use random assignment if, for example, you are interested in studying the effect of vocational training of prisoners on recidivism. In such cases, we use naturally existing groups where the IV is manipulated by natural occurrence. That is, we find marijuana smokers and compare them to non-smokers, making sure that these two groups are as equivalent as possible. Or we find incarcerated individuals who participated in vocational training and compare them to those who did not.

This type of research is conducted in a natural setting. The IV is determined by the naturally existing groups, that is, smoking marijuana or taking vocational training while incarcerated. The DV is usually some type of individual outcome, such as brain health or recidivism in the above examples. We can use multiple design elements to establish causal relationships in quasi-experiments. Since the treatment group and control group are not created through random assignment, our ability to make causal inferences is limited by our ability to control confounding variables.

One feature often used to compensate for non-equivalence is a pretest or baseline measure of important variables. Quasi-experiments use design elements, practical logic, and statistical controls to infer plausible causes of the effect. Quasi-experiments thus have a moderate ability to control threats to internal validity compared to true experiments. Some quasi-experimental designs are inherently longitudinal (data collected more than once), observing the DV over time, but other designs can also be made longitudinal by adding more observations. More treatment groups can be added, and more than one control group can be added, particularly when using non-equivalent control designs. Quasi-experiments are often referred to as field experiments because they take place in the natural world rather than a laboratory setting.

Interrupted time series (ITS) design

This design has the potential to provide strong causal inferences. As the name suggests, this design takes periodic measures of the DV before and after the treatment. It usually has just one group. The multiple pretest measures help us establish a stable baseline of the DV over time, which strengthens our ability to make a causal inference if a change in the DV is then observed.

	O	O	O	O	O	XT	O	O	O	O	O

Let’s look at an example. Suppose the government is interested in increasing the speed limit of a particular highway from 65 to 85 miles per hour but is concerned about the effect it might have on fatal accidents. A single-group ITS can effectively measure the effect of the change in the speed limit. Here the pretest observations of our DV, the number of fatal accidents, serve as the baseline (control) condition. We can observe the effects of increasing the speed limit (implemented from July onwards) on our DV and then run a statistical test of significance.

O	O	O	O	O	O	XT	O	O	O	O	O	O	
Jan						July					Dec
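A common way to analyze a single-group interrupted time series like this is segmented regression: model the outcome as a function of time, an indicator for the post-intervention period, and the time elapsed since the intervention. The sketch below uses statsmodels with fabricated monthly accident counts; it illustrates the general approach rather than the author’s analysis.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical monthly fatal-accident counts; the speed limit changes after month 6 (July)
df = pd.DataFrame({
    "time": list(range(1, 13)),                    # Jan = 1 ... Dec = 12, overall time trend
    "post": [0] * 6 + [1] * 6,                     # 1 after the speed-limit increase
    "time_since": [0] * 6 + list(range(1, 7)),     # months elapsed since the change
    "accidents": [12, 11, 13, 12, 11, 12, 15, 16, 15, 17, 16, 18],
})

# Segmented regression: 'post' captures the level change, 'time_since' the slope change
model = smf.ols("accidents ~ time + post + time_since", data=df).fit()
print(model.params)
```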

There are a few caveats that we should consider before inferring causality in such designs. For example, there may be seasonal variations: winter weather might contribute to a rise in the number of fatal accidents. If we suspect this, we can extend the pretest and posttest measurements to cover a whole year each, which can eliminate seasonal variation. We can also look for a comparable control group, such as a similar highway where the speed limit was not increased and for which data are available for the same months.

This type of design is most commonly used for measuring the effectiveness of social interventions. However, we should be mindful of delayed treatment effects, as implementation often takes time and personnel, and innovations take time to diffuse and gain acceptance. Thus, we should use multiple design elements and proper statistical controls to strengthen our causal inferences. When done properly, these designs are the best alternative to a true experiment.

Nonequivalent control group design

Often referred to as an observational study, this is a simple design where two (or more) naturally occurring groups are studied with pretest and posttest measures.

O	XT	O
O	XC	O 

For example, suppose we are interested in measuring the psychological impact of war trauma. We take two groups of people: those who were exposed to war trauma and those who were not. We take a pretest observation of our DV for both groups, attempt to statistically remove any differences measured at the pretest, and then compare the posttest measures between the groups with a significance test.

As you might have seen, this design’s ability to control confounding variables depends on our ability to measure and statistically control all covariates (control and confounding variables). Since it is not possible to control for all covariates, we are unlikely to make strong causal inferences. However, we can use matching to create comparable groups, which provides more confidence as long as the period between the pretest and posttest is neither too short nor too long for variation in the psychological impact to appear. Another obvious strategy is to employ multiple pretest and posttest observations. Rosenbaum (1987) also suggested creating multiple control groups to reduce the effects of confounding variables. Ultimately, our ability to make a causal inference is limited by our ability to find an equivalent control group.
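One simple way to analyze a nonequivalent control group design is to compare pre-to-post change between the two groups, essentially a difference-in-differences comparison. The sketch below does this with a t-test on gain scores; the data are fabricated, and this is only one of several reasonable approaches (ANCOVA on the posttest is another).

```python
import numpy as np
from scipy import stats

# Hypothetical pretest/posttest scores on a psychological-impact measure
exposed_pre   = np.array([30, 28, 32, 31, 29])
exposed_post  = np.array([42, 40, 45, 43, 41])
control_pre   = np.array([29, 31, 30, 28, 32])
control_post  = np.array([31, 33, 31, 30, 34])

# Gain scores (post - pre) for each group, compared with an independent-samples t-test
exposed_gain = exposed_post - exposed_pre
control_gain = control_post - control_pre
t_stat, p_value = stats.ttest_ind(exposed_gain, control_gain)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```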

Pre-experimental designs

Pre-experiments are the simplest forms of design; they allow us to see the effects of manipulating the IV on the DV but offer little control. On the continuum of internal validity, pre-experimental designs rate as the weakest research designs because they lack substantial ability to control rival explanations. These designs are considered crude and are largely avoided.

Single group, posttest only

This is the weakest form of design. There is just one group, and the effect is measured after we administer the treatment. This design does not allow us to know whether the change observed in the DV is because of the treatment or because of pre-existing differences. Nor does it allow us to know whether the change is due to the IV or to other confounding variables.

XT	O

Within group (pretest-posttest) design

As the name suggests, this design lacks a control group. It has only one treatment group, with a pretest and posttest observation of the DV. The obvious weakness of this design is the lack of a control group and, hence, its limited ability to rule out rival explanations.

O	XT	O

Between group design

Also referred to as the static group comparison design, it involves a control group and a treatment group created without random assignment (pre-existing groups) and a posttest measure of the DV. Since it uses pre-existing groups and lacks pretest measures, we can never be sure whether the change in the DV is caused by the change in the IV or by some other extraneous variable.

XT	O
XC	O

Conclusion

We must design our study in a manner that helps us fulfill the purpose of our study. However, research design is constrained by our ability to control the different variables of interest. The table below summarizes the strengths of the designs discussed above. Correlational studies offer very little control over variables: the variables are merely observed without the use of strong design elements, the data are observed rather than manipulated, and we use statistical analyses to examine the relation between two or more variables. Quasi-experiments offer a moderate level of control; they attempt to establish a causal relationship in a natural setting where the IV is manipulated by natural occurrence between pre-existing groups, and we use statistical controls to balance the internal and external validity of the study. True experiments offer the highest level of control over variables of interest; they are used to establish causal relationships under a strictly controlled laboratory environment where the IV is manipulated by us and groups are created using random assignment. This design provides the strongest internal validity but may weaken external validity. Next we will look at threats to internal validity.

	Correlational	Pre-experiment	Quasi-experiment	True experiment
# of conditions / treatments	None	One or more	One or more	Two or more
Use of random assignment	No	No	No	Yes
Use of pretest	No	Sometimes	Yes	Sometimes
IV manipulation	No	Yes – by researcher or natural occurrence	Yes – by natural occurrence	Yes – by researcher
Plausible rival explanations	Many	Many	Moderate	Very few
Level of control	None	Weak	Moderate	Strong
Differences across research designs

Sources:

Campbell, D. T., & Stanley, J. C. (1963). Experimental and quasi-experimental designs for research on teaching. In N. Gage (Ed.), Handbook of research on teaching (pp. 171–246). Chicago: Rand-McNally.

Cochran, W. G. (1965). The planning of observational studies of human populations (with discussion). Journal of the Royal Statistical Society, Series A, 128(2), 134–155. doi: 10.2307/2344179

Fisher, R. A. (1935). The design of experiments (7th ed.). New York: Hafner.

Reichardt, C. S., & Mark, M. M. (2004). Quasi-experimentation. In J. S. Wholey, H. P. Hatry, & K. E. Newcomer (Eds.), Handbook of practical program evaluation (2nd ed., pp. 126–149). San Francisco: Jossey-Bass.

Rosenbaum, P. R. (1987). The role of a second control group in an observational study (with discussion). Statistical Science, 2(3), 292–316. doi:10.1214/ss/1177013232

Shadish, W., Cook, T., & Campbell, D. (2002). Experimental and quasi-experimental designs for generalized causal inference. Boston, MA: Houghton Mifflin.

Solomon, R. L. (1949). An extension of control group design. Psychological Bulletin, 46(2), 137–150. doi:10.1037/h0062958

Willson, V. L., & Putnam, R. R. (1982). A meta-analysis of pretest sensitization effects in experimental design. American Educational Research Journal, 19(2), 249–258. doi:10.2307/1162568

Cite this article (APA)

Trivedi, C. (2024, April 5). Research design. ConceptsHacked. Retrieved from https://conceptshacked.com/research-design

Chitvan Trivedi

Chitvan is an applied social scientist with a broad set of methodological and conceptual skills. He has over ten years of experience in conducting qualitative, quantitative, and mixed methods research. Before starting this blog, he taught at a liberal arts college for five years. He has a Ph.D. in Social Ecology from the University of California, Irvine. He also holds Masters degrees in Computer Networks and Business Administration.

