Evaluation Focus and Question


Introduction

Team Read is a program that helps second- through fifth-grade students improve their reading skills through mentorship by high school students, and it involves interaction with various stakeholders, including both students and mentors. For this reason, the evaluation conducted by Margo Jones focused on the program's impact, including factors such as reading skill improvement, effects on coaches, and the program's current strengths and weaknesses (Cornelia Ng & Lindenberg, 2000). The evaluation design developed for this assessment included surveys and interviews, along with a quantitative evaluation of students' success both within and outside the program. Thus, it may be concluded that the evaluation design chosen by Jones is appropriate for the assessment's focus.

Measurements

The overall evaluation process can be divided into three major areas of assessment:

  • The measurement of the coaches' satisfaction rates;
  • The measurement of program efficacy through the comparison of pre-program and post-program skill evaluation results;
  • The measurement of program efficacy through the comparison of reading skills between program participants and district students not involved in Team Read (Cornelia Ng & Lindenberg, 2000).

The major quantitative indicators of success are the results of three major reading skill assessment tools: the Developmental Reading Assessment (DRA), the Iowa Test of Basic Skills (ITBS), and the Washington Assessment of Student Learning (WASL). Considering the evidence-based model of education assessment, along with the adoption of the No Child Left Behind Act, it is reasonable to assume that such quantitative measurements remain the most efficient means of quality assessment (Gall et al., 2015). Thus, the measurement method should be regarded as appropriate in the given setting.

Data

According to researchers, data gathering, although not the only consideration, is one of the primary aspects of a proper evaluation process (Rossi et al., 2004). Thus, considering the evaluation goals, Jones focused primarily on gathering quantitative data by collecting pre-test and post-test results in 1999 and 2000, both among Team Read participants and among all district students. The subsequent analysis was based primarily on the mean differences between these scores. The level of satisfaction among the coaches was measured with the help of questionnaires filled in by the participants (Cornelia Ng & Lindenberg, 2000). Taking the setting into account, one may conclude that such approaches to data collection are the most appropriate way to obtain tangible quantitative results from which to derive recommendations for the stakeholders.
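To make the mean-difference logic concrete, the sketch below reduces the core computation to a few lines of Python. The scores are hypothetical placeholders, not the actual 1999 and 2000 Team Read data, which the case does not reproduce.

    # Minimal sketch of the mean-difference computation underlying the
    # analysis; all scores are hypothetical, not actual Team Read data.
    from statistics import mean

    # Hypothetical pre- and post-program reading scores for the same students
    pre_scores = [14, 16, 12, 18, 15, 13]
    post_scores = [18, 19, 15, 22, 17, 16]

    # Per-student gains and the overall mean difference
    gains = [post - pre for pre, post in zip(pre_scores, post_scores)]
    print(f"Mean pre-test score:  {mean(pre_scores):.2f}")
    print(f"Mean post-test score: {mean(post_scores):.2f}")
    print(f"Mean difference:      {mean(gains):.2f}")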

Methodological Approach

Since the program evaluation addressed three major goals, three different methodological approaches were employed to analyze the collected data. As far as reading skill efficacy was concerned, Jones applied statistical analysis to the quantitative pre-test and post-test results (Cornelia Ng & Lindenberg, 2000). Jones acknowledged the possible limitations of such an assessment, emphasizing the qualitative differences between the reading tests administered to students in different grades. The measurement of coaches' satisfaction was more subjective, as it relied on a questionnaire with a small number of questions about the coaches' experience. Finally, analyzing the strengths and weaknesses of the program was a complex endeavor that included interviewing primary stakeholders and interpreting the collected data in the context of the existing research literature on cross-age tutoring (Cornelia Ng & Lindenberg, 2000). Hence, it may be concluded that all the approaches to analyzing the data were appropriate in the given setting, but the outcomes of the analysis cannot be considered fully reliable due to the subjective elements involved.
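The case does not specify which statistical tests Jones applied, but for this design a common choice would be a paired t-test for the pre-test/post-test comparison and an independent-samples t-test for the participant/district comparison. The following sketch, again with hypothetical scores, illustrates that approach using SciPy:

    # Illustrative tests for the two quantitative goals; both the test
    # choices and the scores are assumptions for demonstration only.
    from scipy import stats

    # Pre- vs. post-program scores for the same participants (paired)
    pre = [14, 16, 12, 18, 15, 13, 17, 11]
    post = [18, 19, 15, 22, 17, 16, 20, 14]
    paired = stats.ttest_rel(pre, post)
    print(f"Pre/post (paired): t = {paired.statistic:.2f}, p = {paired.pvalue:.4f}")

    # Participants vs. non-participating district students (independent)
    participants = [18, 19, 15, 22, 17, 16, 20, 14]
    district = [15, 17, 14, 19, 16, 13, 18, 12]
    indep = stats.ttest_ind(participants, district)
    print(f"Participants vs. district: t = {indep.statistic:.2f}, p = {indep.pvalue:.4f}")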

Conclusion

According to the research, the overall logic model of program evaluation encompasses constituents such as inputs, activities, outputs, outcomes, context, and impact (Frechtling, 2007). In the present evaluation, the notion of outcomes was prioritized by Jones. Undeniably, the evaluation process yielded statistically valuable material for reconsidering the program's efficacy. However, one of the major weaknesses of the evaluation is the lack of a roadmap to follow in order to secure the program's impact.

References

Cornelia Ng, S., & Lindenberg, M. (2000). Team Read (B): Evaluating the efficacy of a cross-age tutoring program in the Seattle School District. Web.

Frechtling, J. A. (2007). Logic modeling methods in program evaluation. Wiley.

Gall, M. D., Gall, J. P., & Borg, W. R. (2015). Applying educational research: How to read, do, and use research to solve problems of practice (7th ed.). Pearson.

Rossi, P. H., Lipsey, M. W., & Freeman, H. E. (2004). Evaluation: A systematic approach (7th ed.). SAGE Publications.
