Quasi-Experimental Research: Nutrition


Bartholomew, Miller, Ciccolo, Atwood, and Gottlieb (2008) used a quasi-experimental design to evaluate the Walk Texas! clinical counseling guide for nutrition. The design was appropriate because the authors focused on evaluating the efficacy of the intervention in a specific population: Women, Infants, and Children (WIC) clients. Randomly assigning participants within this population was not practical, so the quasi-experimental design offered an alternative to a classical randomized experiment. The availability of advanced statistical control techniques also made this design feasible; without such techniques, the authors would have had to rely on randomized experiments to rule out competing explanations. These considerations justify the use of the quasi-experimental design by Bartholomew et al. (2008).
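
To make the idea of statistical control concrete, the sketch below shows one common technique, regression adjustment for a baseline covariate, applied to simulated data. It is a minimal illustration only: the variable names, effect size, and data are hypothetical and are not drawn from Bartholomew et al. (2008).

```python
# Minimal sketch: regression adjustment as a statistical control technique
# in a non-randomized (quasi-experimental) comparison.
# All data and effect sizes below are simulated and purely illustrative.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200

# Hypothetical baseline covariate and group indicator.
baseline = rng.normal(2.0, 0.5, n)           # e.g., servings per day at baseline
treated = (rng.random(n) < 0.5).astype(int)  # 1 = received the counseling guide

# Outcome depends on the baseline covariate and an assumed treatment effect.
outcome = baseline + 0.4 * treated + rng.normal(0, 0.5, n)

# Adjusting for the baseline covariate helps separate the treatment effect
# from pre-existing group differences when random assignment is absent.
X = sm.add_constant(np.column_stack([treated, baseline]))
model = sm.OLS(outcome, X).fit()
print(model.params)  # [intercept, adjusted treatment effect, baseline slope]
```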

Slack and Draugalis (2001) note that research studies have different types of validity. Bartholomew et al. (2008) addressed both internal and external validity in their study. Internal validity differs from external validity in that it evaluates whether the effects observed in a study are attributable to the independent variable rather than to some other factor. External validity, by comparison, concerns a study's replication logic and whether researchers can generalize its findings to other situations. In the case study, Bartholomew et al. (2008) explained the internal validity of the study well because they identified the threats to it; for example, they showed that the failure to obtain a standardized frequency of delivery could have undermined internal validity. The researchers also explained the study's external validity, although they did not make the link between replication and improved external validity explicit. Nonetheless, they argued that the magnitude of the effect they observed could offset the lack of experimental rigor in their study (Bartholomew et al., 2008).

Assessing whether a study's findings can be trusted is similar to assessing its validity. According to Slack and Draugalis (2001), researchers have proposed different approaches for assessing a study's validity. For example, according to Dinardo (2008), assessing validity involves a four-step process built around the following key questions: What is the nature of the research questions? How well is the research question aligned with the research design? How did the researchers conduct the study? And are there rival explanations for the study's findings? This information is also useful in assessing the statistical significance of research findings, for example in confirming that the p-value is at or below the conventional 0.05 threshold, which indicates that the findings are unlikely to be a product of chance variation alone (Slack & Draugalis, 2001). If the observed difference does not arise from chance variation, researchers should make further efforts to establish internal and external validity. Assessing internal validity requires an observer to have sufficient information about a study's operational procedures and research design (Dinardo, 2008), because this information shows how the study's procedures inform its findings. If observers can show that the findings result from the study's treatment processes, internal validity is high; if the findings are merely products of confounding factors or bias, internal validity is low (Dinardo, 2008). Lastly, assessing a study's external validity requires a proper understanding of the inclusion and exclusion criteria used in the study. In line with this reasoning, Slack and Draugalis (2001) underscore the importance of knowing the criteria used to select the respondents. Studies with high external validity typically show close alignment between the treatment and the kinds of respondents used to evaluate the studied phenomenon. The case study contains all of the information described here for investigating both internal and external validity.
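
The significance criterion described above can be illustrated with a short sketch. The comparison below runs a two-sample t-test on simulated outcome data and applies the conventional 0.05 threshold; the group sizes, means, and variable names are assumptions made for illustration, not values from the case study.

```python
# Illustrative only: a two-sample t-test on hypothetical outcome data,
# showing how the conventional 0.05 threshold is applied.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
control = rng.normal(2.1, 0.6, 80)    # hypothetical comparison-group outcomes
treatment = rng.normal(2.5, 0.6, 80)  # hypothetical treatment-group outcomes

t_stat, p_value = stats.ttest_ind(treatment, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

# A p-value at or below 0.05 is conventionally read as evidence that the
# observed difference is unlikely to reflect chance variation alone.
alpha = 0.05
print("statistically significant" if p_value <= alpha else "not statistically significant")
```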

Although Bartholomew et al. (2008) established the internal and external validity of their study, my experience shows that quasi-experimental designs are often subject to internal validity problems because of the incompatibility between treatment and control groups. Randomized trials, by comparison, give every participant an equal chance of being allocated to the treatment or control group. Quasi-experimental designs do not show this level of internal validity because the differences between the control and treatment groups do not arise purely by chance (Dinardo, 2008); some systematic factor may instead be the cause of the observed differences. In my experience, quasi-experimental designs may also fail to demonstrate a strong link between the treatment conditions and the observed outcomes, particularly when there are unaccounted-for confounding factors. Internal validity matters here because it captures the approximate truth about the causal relationships underlying the research findings; in the context of the case study, for example, it determines whether we can conclude that minimal interventions changed the eating patterns of the respondents. Internal validity is therefore central to quasi-experimental designs, since understanding causal relationships is at the core of such research. In this regard, the main question to ask when trying to improve the internal validity of the case study is whether there are other plausible reasons, besides those proposed in the paper, that could explain the outcomes observed in the study.
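
The concern about non-random assignment can also be shown with a simple simulation. In the sketch below, a confounder influences both who receives the treatment and the outcome, so the naive difference in group means overstates the true effect under quasi-experimental assignment but not under randomization. All numbers are fabricated for illustration and have no connection to the case study.

```python
# Hedged illustration: how non-random assignment plus a confounder biases
# a simple treatment/control comparison. Entirely simulated data.
import numpy as np

rng = np.random.default_rng(2)
n = 5000
true_effect = 0.3

# A confounder (e.g., baseline motivation) affects both group membership
# and the outcome in the quasi-experimental scenario.
motivation = rng.normal(0, 1, n)

# Quasi-experimental assignment: more motivated people tend to opt in.
quasi_treated = (motivation + rng.normal(0, 1, n) > 0).astype(int)
outcome_q = true_effect * quasi_treated + motivation + rng.normal(0, 1, n)
gap_quasi = outcome_q[quasi_treated == 1].mean() - outcome_q[quasi_treated == 0].mean()

# Randomized assignment: group membership is independent of motivation.
rand_treated = rng.integers(0, 2, n)
outcome_r = true_effect * rand_treated + motivation + rng.normal(0, 1, n)
gap_rand = outcome_r[rand_treated == 1].mean() - outcome_r[rand_treated == 0].mean()

print(f"true effect:                 {true_effect}")
print(f"quasi-experimental estimate: {gap_quasi:.2f}  (biased by the confounder)")
print(f"randomized estimate:         {gap_rand:.2f}  (close to the true effect)")
```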

References

Bartholomew, J., Miller, B., Ciccolo, J., Atwood, R., & Gottlieb, N. (2008). Walk Texas! 5-a-day intervention for Women, Infants, and Children (WIC) clients: A quasi-experimental study. Journal of Community Health, 33, 297–303.

Dinardo, J. (2008). Natural experiments and quasi-natural experiments. The New Palgrave Dictionary of Economics, 5(1), 856–859.

Slack, M., & Draugalis, J. (2001). Establishing the internal and external validity of experimental studies. American Journal of Health-System Pharmacy, 58(22), 1–10.
