
Where does good evidence come from?

Gorard, Stephen and Cook, Thomas (2007) Where does good evidence come from? International Journal of Research & Method in Education, 30 (3). pp. 307-323. ISSN 1743-727X


URL of Published Version: http://dx.doi.org/10.1080/17437270701614790

Identification Number/DOI: 10.1080/17437270701614790

This paper started as a debate between the two authors. Both authors present a series of propositions about quality standards in education research. Cook’s propositions, as might be expected, concern the importance of experimental trials for establishing the security of causal evidence, but they also include some important and acceptable practical alternatives, such as regression discontinuity analysis. Gorard’s propositions, again as might be expected, tend to place experimental trials within a larger mixed-method sequence of research activities, treating them as important but without giving them primacy. The paper concludes with a synthesis of these ideas, summarising the many areas of agreement and clarifying the few areas of disagreement. The latter include what proportion of available research funds should be devoted to trials, how urgent the need for more trials is, and whether the call for more truly mixed-methods work requires a major shift in the research community.

Type of Work: Article
Date: 2007 (Publication)
School/Faculty: Colleges (2008 onwards) > College of Social Sciences
Department: Department of Education and Social Justice


Additional Information:

This is an electronic post-print version of an article published in International Journal of Research and Method in Education Vol. 30, No. 3 (2007): 307-323.

Keywords: education, social justice, education and social justice, education research, quality, methodology, research methods
Subjects: LB Theory and practice of education
H Social Sciences (General)
L Education (General)
Institution: University of Birmingham, Northwestern University
Copyright Holders: Taylor & Francis
ID Code: 599
