DIBELS Has Not Been Validated for Use to Evaluate Teachers or Principals

The DIBELS measures were designed and have been validated as universal screening and progress monitoring tools. Many schools have used these measures to assist in making individual student, classroom, grade-level, and schoolwide instructional decisions that have improved learning outcomes for students. Recently, some for-profit companies have begun promoting the use of the DIBELS measures to evaluate teachers and principals. Our position is that this approach represents an inappropriate use of DIBELS and undermines the basis on which these types of measures were developed. Furthermore, there is no rigorous research supporting the use of these measures in this way. In fact, many other agencies have already clearly indicated that the DIBELS measures should not be used in this manner, including:

• The Florida Center for Reading Research
  o http://www.fcrr.org/assessment/pdf/faqs/dibels_faqs_052005.pdf, FAQ #86

• RAND Corporation
  o http://www.rand.org/content/dam/rand/pubs/technical_reports/2010/RAND_TR917.pdf
    "... measurement experts often express concerns about attaching high stakes to such diagnostic assessments as the DIBELS and Gates-MacGinitie because the assessments are designed to inform rather than evaluate instruction (see, for example, AERA, APA, & NCME, 1999)."

• Dynamic Measurement Group
  o http://dibels.org/papers/Myths_0208.pdf
    "At a systems level, DIBELS were not intended to be used to evaluate individual teachers or be used for other systems-level high-stakes decisions, such as funding (Kaminski & Good, 1996)."
  o http://dibels.org/papers/Accountability_1207.pdf
    "It has never been the intention of the developers of DIBELS that the data be used to evaluate individual teachers or be used for other high-stakes decisions, such as funding (Good et al., 2004)."
  o "DIBELS were designed for use in identifying children experiencing difficulty in the acquisition of basic early literacy skills in order to provide support early and to prevent the occurrence of later reading difficulties" (Kaminski, Cummings, Powell-Smith, & Good, 2008, p. 1181).

While we agree that the education field needs better measures and methods to improve accountability and teacher evaluation procedures, we are concerned that companies are using DIBELS in ways that are not based on the available science and that are potentially reckless. Assessment tools should be used for the purposes for which they were validated (AERA, APA, & NCME, 1999), and our view is that teachers should not be evaluated with materials that do not meet the Standards for Educational and Psychological Testing (http://teststandards.org/). We at the Center on Teaching and Learning (CTL) at the University of Oregon will continue to research and revise the measures, as well as collaborate with other researchers, to improve educational decision making and communicate those findings to the field when we have confidence in the data.

Might there be a role for measures like DIBELS to play in estimating student growth related to teacher performance and other school-level factors? Possibly. But we can emphatically say that there is currently no solid scientific research supporting this process.

References

American Educational Research Association (AERA), American Psychological Association (APA), and National Council on Measurement in Education (NCME). (1999). Standards for Educational and Psychological Testing. Washington, DC: Author.

Good, R. H., Kaminski, R. A., Shinn, M., Bratten, J., Shinn, M., Laimon, L., Smith, S., & Flindt, N. (2004). Technical Adequacy and Decision Making Utility of DIBELS (Technical Report No. 7). Eugene, OR: University of Oregon. Available: https://dibels.uoregon.edu/research/techreports/#dibels

Kaminski, R., Cummings, K. D., Powell-Smith, K. A., & Good, R. H. (2008). Best Practices in Using Dynamic Indicators of Basic Early Literacy Skills for Formative Assessment and Evaluation. In A. Thomas & J. Grimes (Eds.), Best Practices in School Psychology V (pp. 1181-1203). Bethesda, MD: National Association of School Psychologists.

Kaminski, R., & Good, R. (1996). Toward a technology for assessing basic early literacy skills. School Psychology Review, 25(2), 215-227.

dibels.uoregon.edu
© University of Oregon Center on Teaching and Learning. All rights reserved. Revision Date: April 3, 2013