Measure of
Teachers’ experience of the feedback they receive about their practice after a classroom observation by a feedback provider.
Use this measure: Access a copy of the BTEN Teacher Feedback Survey
Measurement instrument overview
This short survey asks teachers about the frequency of receiving feedback, the value of this feedback, and the relational trust they experience with one or more feedback providers.
Connection to student learning
Feedback is one of the key levers through which teachers can improve their practice and thereby enhance student learning. Feedback on instructional practice ideally occurs regularly, in a timely way, and in the context of a relationship in which the feedback provider is trusted by the teacher.i
i Hannan, M., Russell, J. L., Takahashi, S., & Park, S. (2015). Using improvement science to better support beginning teachers: The case of the Building a Teaching Effectiveness Network. Journal of Teacher Education, 66(5), 494–508. doi:10.1177/0022487115602126; Hattie, J., & Timperley, H. (2007). The power of feedback. Review of Educational Research, 77(1), 81–112.
What we know about how well this measure works for its intended use
This survey was one tool in a broad effort to use improvement science (or continuous improvement) methods to improve the feedback process for early career teachers across a network of schools and districts.ii The work was based at the Carnegie Foundation for the Advancement of Teaching, and involved the Austin Independent School District (TX) and Baltimore City Public Schools (MD) as the two core school districts using the measures. The impact of the overall effort varied widely: in some schools practices were transformed, while in others, owing to a myriad of contextual factors, practices did not change. At the sites where the effort made a difference, the feedback survey data played an important role.
In a separate analysis, the Austin Independent School District research office found that in schools that participated in this effort, on average, teachers had stronger levels of trust in their principals, and more positive experiences of feedback, in contrast to comparable schools that did not participate.
ii Hannan, Russell, Takahashi, & Park, 2015
Frequency
The survey is designed to be used once every four to eight weeks. The items prompt survey takers to reflect on feedback they have received in the past month.
Measurement routine details
The BTEN Feedback Survey was administered using Qualtrics or on paper. Qualtrics was preferred because of the automation of data entry and the survey logic that could be employed (although another online survey platform that enables survey logic could also be used). An initial question asked the survey taker to indicate their feedback provider(s) from the past month, and the survey logic was then set to prompt survey takers to reflect on each feedback provider.
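The branching described above can be sketched in a few lines. This is a hypothetical illustration, not the actual Qualtrics configuration: the respondent first names their feedback provider(s), and the same item block is then presented once per provider. The item wordings shown are placeholders.

```python
# Hypothetical sketch of the survey logic described above (not the actual
# Qualtrics configuration). The item wordings below are placeholders.

FEEDBACK_ITEMS = [
    "How often did you receive feedback from this person in the past month?",
    "How valuable was the feedback you received from this person?",
    "How much do you trust this person?",
]

def build_survey_flow(selected_providers):
    """Expand the item block once for each provider the teacher selected."""
    flow = []
    for provider in selected_providers:
        for item in FEEDBACK_ITEMS:
            flow.append((provider, item))
    return flow

# Two providers x three items -> six prompts in the flow.
flow = build_survey_flow(["Principal", "Instructional coach"])
print(len(flow))  # 6
```

The point of the branching is that each provider named in the initial question gets their own pass through the item block, so a teacher with two feedback providers reflects on each relationship separately.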
The network hub of BTEN at the Carnegie Foundation for the Advancement of Teaching conducted the analytic work, designing and executing an analytic process that enabled the school district team to receive the data report in five days (with Qualtrics) or six days (with paper surveys) from the closing of the survey window. Data analysts and continuous improvement specialists from the network hub met with leaders of the participating districts to discuss the survey results, interpretations, and implications for next steps. At times, these next steps included further analyses of the data (for example, looking for relationships across survey items that district leaders wondered about). District leaders then shared the data with school improvement teams so they could see their progress and determine next steps.
Data analysis details
School level and district level reports were created, displaying data over time as each subsequent survey administration was added. Below are screenshots of the data report produced in February of one of the project years for one of the participating schools/districts.
District level: Number of feedback providers for each teacher
For the district, the report included a display indicating the proportion of teachers receiving feedback and the number of different feedback providers giving that feedback. District leaders wanted to know this information to understand the level of coordination that was needed among feedback providers.
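This district display reduces to two summaries: the share of teachers receiving any feedback, and the number of distinct providers per teacher. A minimal sketch with pandas, using assumed column names rather than the actual report code:

```python
# Sketch of the district-level display described above. Column names and
# sample data are assumed, not taken from the actual BTEN report.
import pandas as pd

# One row per (teacher, provider) pairing reported in the past month;
# a missing provider means the teacher reported receiving no feedback.
df = pd.DataFrame({
    "teacher":  ["A", "A", "B", "C"],
    "provider": ["Principal", "Coach", "Mentor", None],
})

# Number of distinct feedback providers for each teacher who got feedback.
providers_per_teacher = (
    df.dropna(subset=["provider"])
      .groupby("teacher")["provider"]
      .nunique()
)

# Proportion of all teachers who received feedback from at least one provider.
share_with_feedback = len(providers_per_teacher) / df["teacher"].nunique()

print(providers_per_teacher.to_dict())  # {'A': 2, 'B': 1}
```

The provider counts are what signal the coordination question the district leaders cared about: teachers with two or more providers are the ones whose feedback needs to be aligned.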
School level: Frequency of feedback by role type
An individual school report, on the other hand, included a table that indicated which role type provided feedback and with what frequency. The report tabulated responses from all teachers who took the survey at the school.
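A table like the one described is essentially a cross-tabulation of provider role type against reported frequency. A sketch with pandas, under assumed column names and sample data:

```python
# Sketch of the school-level role-by-frequency table described above.
# Column names and sample data are assumed, not from the actual report.
import pandas as pd

school_responses = pd.DataFrame({
    "provider_role": ["Principal", "Mentor", "Principal", "Coach", "Mentor"],
    "frequency":     ["Weekly", "Monthly", "Weekly", "Monthly", "Weekly"],
})

# Rows: role type of the feedback provider; columns: reported frequency.
table = pd.crosstab(school_responses["provider_role"],
                    school_responses["frequency"])
print(table)
```

Each cell counts how many teacher responses reported that role type giving feedback at that frequency, which is what lets a school team see at a glance who is providing feedback and how often.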
District and school level: Value of feedback
At both the district and individual school levels, graphs showed how teachers responded to the question about the value of the feedback. Stacked bar graphs showed all respondents, and line graphs showed only the teachers who responded at all time points. The line graphs were used as checks to indicate whether shifts seen in the bar graphs were a result of differing individuals in the samples or whether those shifts might indicate a change in experiences and perceptions.
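The consistent-responder check described above — restricting the line graph to teachers who responded at every time point — can be sketched with pandas. This is an illustration under assumed column names, not the network's actual analysis code:

```python
# Sketch of the consistent-responder check described above. Column names
# and sample data are assumed, not from the network's analysis code.
import pandas as pd

responses = pd.DataFrame({
    "teacher": ["A", "A", "A", "B", "B", "C", "C", "C"],
    "wave":    [1, 2, 3, 1, 3, 1, 2, 3],
    "value_of_feedback": [3, 4, 4, 2, 3, 4, 4, 5],
})

n_waves = responses["wave"].nunique()

# Stacked bar graphs use all respondents at each wave; the line graph
# keeps only teachers present at every wave, as a check on whether an
# apparent shift reflects a changing sample rather than changed perceptions.
waves_per_teacher = responses.groupby("teacher")["wave"].nunique()
consistent = waves_per_teacher[waves_per_teacher == n_waves].index
line_data = responses[responses["teacher"].isin(consistent)]

print(sorted(line_data["teacher"].unique()))  # ['A', 'C']
```

If the bar graphs and the restricted line graphs move together, the shift is more plausibly a real change in teachers' experiences; if they diverge, sample composition is the likelier explanation.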
District and school level: Consistency and manageability of feedback
Stacked bar displays were also created to indicate how consistent (or contradictory) feedback was across varied feedback providers, and also how manageable (or overwhelming) the feedback was. The figures below show data from a school display. Comparable graphs were produced districtwide.
Conditions that support use
- This measurement tool was used within a sustained and intensive effort to enact a Networked Improvement Community, based at the Carnegie Foundation for the Advancement of Teaching.iii This effort included the articulation of an aim statement and theory of improvement about how to improve the development and retention of early career teachers, and change ideas that were tested from a small to wider scale over the course of three years by educators in two school districts. The BTEN Feedback Survey was informed by this theory of improvement.
- The learning structures and routines were essential to the uptake and use of this measurement tool. District leaders met among themselves and with external partners and continuous improvement advisors about the data, and then met with individual school improvement teams to review the data and consider next steps.
- It was important to network leaders that teachers feel appreciated for the time they spent taking the survey. In one district, teachers were provided an honorarium for completing the survey at several time points over the year. In another district, teachers took the surveys during professional meetings at which refreshments were provided.
- This survey was used in conjunction with other data that captured the frequency of feedback conversations, with the understanding that the regularity of feedback is an important part of teachers’ growth.
iiiBryk, A. S., Gomez, L. M., Grunow, A., & LeMahieu, P. G. (2015). Learning to improve: How America’s schools can get better at getting better. Harvard Education Press.
Challenges
- Any survey adds an additional burden of time for survey takers. While this survey was designed to be short and completed in a few minutes or less, finding the time and space to ensure teachers’ attention and high response rates was not easy. As noted above, an honorarium and refreshments were some of the tools employed to encourage survey completion.
- Teachers may not precisely remember the number of feedback conversations they have had in the past month. To provide a more accurate picture of feedback frequency, the network developed an online observation and feedback tool for feedback providers. This system allowed the network to accurately track how frequently teachers were receiving feedback on their practice (the goal was once every two weeks).
Interviewees:
Sola Takahashi, Senior Research Associate, WestEd