College of Education & Human Development statement on national report
Dear members of the College of Education & Human Development community,
As reported in the national press and the Atlanta Journal-Constitution, another group has released its opinions on the quality of teacher preparation programs. A new organization, the National Council on Teacher Quality (NCTQ), has produced ratings of teacher preparation programs around the country. We join colleagues such as Linda Darling-Hammond, organizations such as AACTE and CAEP, and publications such as Education Week in expressing concern about this organization’s outdated and haphazard methodology and the questionable conclusions that follow from it. As NCTQ acknowledges, its ratings are based on incomplete data nationally and incomplete data from individual institutions. As stated on its website, fewer than one percent of programs in the country “fully cooperated” with its study. Data on the programs at Georgia universities and colleges were obtained through an open-records proceeding.
The list of concerns about the development, analysis and publication of this report is long. Some of the primary concerns relate to the following:
Input data. The NCTQ method of data collection and analysis was a nonsystematic paper review of randomly selected course syllabi, readings and student teaching handbooks. These documents were read subjectively to determine whether they contained NCTQ’s preferred texts and activities. Missing from this checklist is any assessment of the training and expertise of the university faculty or of the instruction they provide, as well as any observational measure of the quality of instruction provided by the would-be classroom teacher. Evaluation of the effectiveness of teacher education programs moved away from this type of input-content analysis to objective, performance-based assessments two decades ago.
Scope of documents requested provided an insufficient basis for the claims and judgments made. The documents collected represented an incomplete view of the programs and therefore led to statements that could not be substantiated by fact or practice; nonetheless, judgments were rendered and published. For example, here at Georgia State University, our early childhood education program was found to be lacking in the content preparation needed for elementary mathematics. Yet NCTQ did not request programs of study outlining program requirements, nor mathematics course syllabi. In fact, our students complete a total of 18 credit hours of mathematics, including three upper-division courses focusing on geometry, statistics and algebra; this is six credit hours more than NCTQ indicates should be the standard. Additionally, our students have field-based placements in urban classrooms, where they work with teachers and children while taking their math methods courses. This pattern of judgments based on incomplete data is repeated in NCTQ’s treatment of our content areas of reading, science, history and the fine arts. Similar experiences are being reported in other states and at other universities. These demonstrate a lack of concern for accuracy on the part of NCTQ and in the products it produces.
Reliability of coding is questionable. For some standards, the coding does not provide an accurate analysis of the documents’ content. For example, our early childhood student teaching experience received no credit for its seven cycles of observation and feedback, which exceed the NCTQ standard. Errors of this type arise because NCTQ’s coding of the documents was not subjected to effective forms of inter-rater reliability. This deficiency may in part explain the report’s substantive errors.
After consulting at length with many of our faculty, we decided, as did many other colleges across the country, not to engage with NCTQ by responding through its portal to correct its information, corrections it may or may not choose to acknowledge or make public. To provide accurate and up-to-date information about the quality of our teacher preparation, we have instead placed on the front page of our website a summary of the objective, measurable and systematic data upon which we base our reputation, with links to more extensive displays of data. We intend to make this information a permanent feature accessible from the front page of our website.
As noted, we have moved far beyond the inadequate methods of evaluation represented by the NCTQ report. We focus on objective measures of performance that are repeatedly employed and reviewed with students. In the future, we will further augment our evaluation system. Georgia State University faculty are part of a statewide initiative to establish Teacher Preparation Program Effectiveness Measures (TPPEM), which will hold educator preparation programs accountable to high program standards and outcomes by applying consistent effectiveness measures across programs. According to the most recent draft of this initiative, TPPEM may base 50 percent of a program’s effectiveness rating on Georgia Teacher Evaluation Measure (TEM) scores of graduates in their first and second years of teaching. Other measures of teacher preparation program effectiveness will include: (a) completers’ success at increasing student achievement during their induction period, (b) their performance on revised and more stringent state content area assessments (GACE), and (c) their performance on a nationally normed assessment of their ability to teach in their content areas (edTPA).
Paul A. Alberto
Interim Dean and Regents’ Professor