Since peer review is a pillar of academe, any comprehensive evaluation of teaching will include input from colleagues, especially when content expertise is required. Peer review is therefore essential for evaluating a faculty member's professional development as well as syllabi, exams, and other aspects of instructional design and assessment. Peer review can also contribute to the evaluation of instructional delivery when the class observation methods are valid and reliable. However, Arreola (2000) argues that peers should not form "an overarching peer evaluation committee" that reviews all of the evidence from the faculty member, students, and colleagues and issues a recommendation about the faculty member's overall performance (p. 91). Not only is such a process extraordinarily time-consuming, but, Arreola points out, studies suggest that it produces unreliable results. Instead, he recommends that, in the overall evaluation process, peers (like administrators and students) should represent only one source of information and that the peers who contribute information should be the ones "who are in the best position to have first-hand knowledge of the performance in question" (pp. 90-91).
According to Nancy Chism (1999), author of Peer Review of Teaching, "most writers on peer review see review of course materials as the optimal way in which peers can be involved" (p. 42). Because of their content expertise, peer reviewers who share the instructor's specialty are more qualified than students to assess many of the qualities of content expertise, instructional design, and assessment that course materials reveal (see the table below):
| Component of Teaching | Course Materials |
| --- | --- |
| Content Expertise | lecture notes or slides, bibliographies, handouts, etc. |
| Instructional Design | syllabi, grading criteria, course goals and objectives |
| Instructional Assessment | assignments, exercises, quizzes, exams, papers, teacher's written feedback on graded papers or tests |
In his book Evaluating Faculty for Promotion and Tenure, R. Miller (1987) has published a checklist that can help you evaluate course materials in order to assess course organization, course objectives, instructional methodology, course content, homework assignments, and student learning. In his book Developing a Comprehensive Faculty Evaluation System, Arreola (2000) has also published a useful checklist and rating scale; these come from Georgia Perimeter College. Another helpful form was developed by G.F. Lazovik at the University of Pittsburgh.
Evaluating Syllabi: At Howard, you can use CETLA's Syllabus Checklist to evaluate syllabi. The checklist is based upon national best practices and guidelines issued by the Office of the Provost.
Evaluating Portfolios: If a faculty member submits a portfolio, peers should agree on standard criteria, such as completeness, clarity, variety of sources (e.g., self, students, peers), and consistency between the faculty member's stated teaching philosophy and the evidence in the portfolio. You may find any of the following sources useful for evaluation:
"Teaching Portfolio Assessment." Peter Doolittle, from the ERIC Clearinghouse on Assessment and Evaluation, recommends standardizing the evaluation process as much as possible by (1) requiring certain course materials, (2) using a Likert-type scale to rate the required items, and (3) weighting categories to compute a composite score.
"Evaluating Teaching Portfolios." The University of Wisconsin's Teaching Academy poses questions to answer when evaluating teaching portfolios.
"INTASC Rubric." This rubric, designed at Southern Utah University to meet INTASC standards for teacher-candidates, will show you how you can evaluate a portfolio against the professional standards of a discipline.
"Peer Review of Teaching Project." The University of Nebraska has adopted a radically different approach to peer evaluation via portfolios, an approach that won the TIAA-CREF Theodore M. Hesburgh Award Certificate of Excellence: In teams of 2-5 members of a department or program, "faculty explore and apply peer review for documenting, promoting, and valuing the intellectual work of teaching." In the process, the teammates develop "benchmark," "course," and "inquiry" portfolios.
Although a peer's observations of a class may help faculty improve their teaching, "the cost in time and effort of gathering valid and reliable peer observations that may legitimately be used to support personnel decisions (e.g., promotion, tenure, etc.) is generally prohibitive" (Arreola, 2000, p. 97). Indeed it is: Valid and reliable peer observation requires a formal introduction to the students, a pre-observation conference with the teacher to establish the context for the observations, a post-observation conference, and multiple visits by a team of trained observers who use a standardized instrument to record their observations. The following website explains how to use the well-tested Flanders system to code observations.
"Flanders Interaction Analysis": These video clips from Nova Southeastern University will show you how to use the Flanders Interaction Analysis System to analyze the interaction in a classroom. Using Flanders' categories, you can code all verbal communication to reveal how the teacher and students initiated or responded to an idea. As a result, you will notice the balance of "teacher talk" vs. "student talk" and silence as well as the quality of expression (e.g., whether the teacher praises or criticizes a student, accepts a student's feelings or ideas, asks a question, gives a direction, or just lectures). The analysis can reveal how student-centered or teacher-centered the classroom is.
If, however, you are observing a class only to provide feedback for improvement, you may choose a simpler procedure for recording your observations. The sources below explain how to conduct a summative peer observation for evaluation vs. a formative peer observation for improvement.
"Peer Observation of Classroom Teaching." Cornell University's Center for Learning and Teaching describes procedures for "peer observation for evaluation" and "peer observation for teaching improvement."
"Preparing for Peer Observation: A Guidebook." Prepared by the Center for Teaching Effectiveness at the University of Texas at Austin, this guidebook describes procedures for both formative and summative peer observations. The Center recommends that observers use a combination of written analyses, rating scales, and checklists, and it includes sample forms in the appendices.
"Peer Review of Teaching." North Carolina State University's site outlines procedures for formative and summative peer reviews of course materials and classroom visits.
"Observing Teaching." The University of Wisconsin's Teaching Academy has posted step-by-step procedures for observing teaching. Included are forms for conducting pre-observation and post-observation conferences with the instructor as well as observation checklists for lecturing, working with student groups, and questioning students.
More classroom observation instruments:
"Classroom Observation Instruments." The Center for Teaching and Learning has posted a catalogue of worksheets, checklists, scales, and report forms for classroom observation.
"Observation Instrument." Boston University's Center for Excellence in Teaching developed this checklist for formative evaluations. The Center says the checklist serves "as a guide to observe key elements of teaching that contribute to a rich learning experience. It is divided into categories that address both form and content; not all categories will be applicable to every teaching situation. The categories include organization, presentation, rapport, content, interaction, and active learning. The instrument can be discussed at the Pre-Observation Meeting and serve as a reminder to the observers of things to consider during the observation. It may also be used during the Post-Observation Meeting to identify the instructor's strengths."
"Peer Review Instruments." This University of South Australia website includes several peer review instruments, including checklists for lecturing and distance education.
Peers may be in the best position to evaluate a faculty member's professional development as a teacher. Normally, peers will review evidence that a faculty member submits. At Howard, a faculty member can submit a CETLA "Workshop Transcript" or "Activity Log" along with a Statement of Teaching Philosophy, letters of appreciation, or a portfolio.
If a faculty member submits a Statement of Teaching Philosophy (with or without a portfolio), you can use the following rubric to evaluate it:
"Rubric for Statements of Teaching Philosophy": This rubric was developed by Matt Kaplan and his colleagues from the Center for Research on Learning and Teaching at the University of Michigan. Kaplan et al. comment, "The design of the rubric was informed by our experience with hundreds of teaching philosophies as well as surveys of search committees on what they considered successful and unsuccessful components of job applicants' teaching philosophies."
Upon reviewing other evidence of professional development, you might ask the following questions (adapted from a form from the University of Iowa's College of Education):
- Does the instructor demonstrate a commitment to continuous inquiry and life-long learning?
- Does the instructor collaborate with colleagues to improve teaching and student learning?
- Does the instructor apply what he or she has learned from professional development to improve teaching?
- Does the instructor pursue professional development to fulfill teaching goals that are aligned with the Department's student achievement goals?
As a result of ongoing professional development, a faculty member may demonstrate what Chism (1999) calls "leadership for teaching." The faculty member may publish books or articles on teaching (i.e., the Scholarship of Teaching and Learning) or develop teaching tips, guides, syllabi, and handouts for departmental use. The faculty member may also serve as a course coordinator, faculty mentor, TA supervisor, lab tutor, director of graduate or undergraduate studies, or chair of a committee on teaching, learning, or assessment. Or the faculty member may volunteer to organize workshops or "brown bag" discussion groups on teaching-related topics. Although some of these activities may count as "service," Chism (1999) argues that "there is a sense in which [they] are organic to teaching, testifying to the depth of commitment, creativity, and student focus that teachers bring to their work" (p. 100).
Braskamp, L.A., & Ory, J.C. (1994). Assessing faculty work: Enhancing individual and institutional performance. San Francisco, CA: Jossey-Bass.
Centra, J.A. (2000). Evaluating the teaching portfolio: A role for colleagues. New Directions for Teaching and Learning, 83, 87-93. doi:10.1002/tl.8307
Chism, N.V.N. (2007). Peer review of teaching: A sourcebook (2nd ed.). Bolton, MA: Anker.
Chism, N.V.N. (2007). Why introducing or sustaining peer review of teaching is so hard, and what you can do about it. The Department Chair: A Resource for Academic Administrators, 18(2), 6-8.
Keig, L., & Waggoner, M.D. (1994). Collaborative peer review: The role of faculty in improving college teaching (ASHE-ERIC Higher Education Report Series). San Francisco, CA:
McEnerney, K., Allen, M.J., Harding, E., & Desrochers, C. (1997). Building community through peer observation. San Diego, CA: American Association for Higher Education.
Miller, R. (1987). Evaluating faculty for promotion and tenure. San Francisco, CA: Jossey-Bass.