By Thomas M. Van Soelen

A team of kindergarten teachers in rural Georgia digs deeply into student work and standards to build a common understanding of writing expectations.

As the kindergarten team left the room, I pondered their query: “How can we measure our students’ Common Core writing progress when the district writing rubric isn’t that helpful?” Answering that question by devising a solution would benefit these educators, their students, and even other schools that patterned their early-grades instruction after the template we created.

Billy Heaton, then principal of High Shoals Elementary School in Bishop, Ga., believed in the power of data teams, which assembled and analyzed information about student performance. Over the years, he had studied such teams deeply and spent much time training his staff on the process, slowly working the concept into the fabric of High Shoals. The kindergarten team at this rural school participated in the data collection and analysis but hesitated to use anything except clear-cut, quantitative (often mathematics) scores to create flexible groupings of students for instruction.

But when Measures of Academic Progress scores from fall 2013 indicated writing was the weakest area for the new crop of kindergarten students, the kindergarten teachers were united in their desire to address this area and decided to use the data team cycle to chronicle their work.

District leaders created their own rubric for kindergarten after deeming the Georgia Common Core reporting standards too minimal. However, as the High Shoals kindergarten team considered the district rubric, they were dissatisfied on two counts: The rubric appeared to prioritize mechanics more than ideas, and the rigor did not reach the higher levels of writing historically achieved in their kindergarten classrooms. The team reached these conclusions after different teachers examined and rated student work multiple times by using the rubric.

For Heaton, the district rubric represented a larger problem: It was designed for adult use, not students. His reading of Arter and Chappuis (2013) and Hattie (2008) convinced him that student self-assessment and related goal setting produce higher student learning and are integral to high-performance cultures. Additionally, having teachers simply look at data about student learning was inadequate because it was not leading to instructional changes.

That meant the kindergarten team would have to create something accessible to 5- and 6-year-olds and useful for their teachers.

Getting it going

To begin, each of the five kindergarten teachers gave students a cold writing prompt: “My family . . .” The young writers had 10 minutes to draw a picture and 20 minutes to write. Once they collected the papers, teachers made multiple copies of each student’s writing for their data discussion, enough for each teacher to assess them all. They obscured names, replacing them with alphabetic identifiers (A, B, C, and so on). Next, they ranked each of the 23 writing samples instead of rating them: Each teacher ordered the writing in a stack from most skilled to least skilled. Each writing sample had to include at least one sticky note identifying which writing characteristics made it more skilled than the sample ranked directly beneath it.

Ranking provided several important outcomes for this collaborative team. Although they had previously shared student work, the district rubric had permeated their conversations, stunting dialogue about the finer points of student writing and how to foster those writing behaviors.

The district rubric also provided cover in their conversations with each other. With a four-point scale, two teachers could easily be on opposite edges of a single scoring cell. For example, a paper could be rated “developing” but still be close to “proficient”; another writing sample also could be deemed “developing” but be close to “does not meet” the standards. The rubric also inadvertently created a practice in which students who scored on the margins were grouped together because of their similar score of “2.”

Although the team knew that ranking by individual teachers would improve the quality of conversation, they underestimated how much this would change relationships, teaching practices, and the students themselves.


Deepening relationships

The kindergarten team wanted to create a process in the spirit of protocols like those developed by the School Reform Initiative. The structured conversation protocol the team developed provided a platform for honest and direct evaluation. Teachers posted their individual sequencing of the writing samples on a matrix. As each teacher indicated their order, there were occasional bursts of laughter; never had it been so clear that the teachers weren’t on the same page. Thank goodness a few samples fell in similar locations along each teacher’s achievement trajectory. The team needed just one similarity to leverage.

Working with the teachers as an external coach, I made notes on the master set of writing samples as they described writing features. The teachers tried to reach consensus about the comments they would make on each writing sample, a process we called annotating through consensus. Although the first sample took some time, progress began to accelerate. The kindergarten teachers’ relationships began to strengthen through these risky conversations, in which each teacher was accountable to the group.

The next step was choosing another writing sample that some rankers had placed similarly. After annotating this sample through consensus, the next question proved harder: “Where does this writer fall compared to the first writer?” The risk grew as teachers offered their rationales for what made quality writing. Some said using proper conventions was important; others lobbied for prioritizing the author’s ideas. The assumptions shared while creating this product were at times stunning. In one instance, nearly every teacher placed a specific student near the lower end of the writing continuum. They agreed on the annotated comments they wanted to make about the student’s work but stumbled over where to place him in relation to other students. As they compared his work, they kept moving him toward the most skilled side until he was near the top, despite numerous errors in conventions. “So, we are saying Writer M is more skilled than Writer F because his organization is clear and word choice is interesting. Is that right?” These conversations enabled teachers to raise many questions that might not have been asked or answered if they had worked alone.

Every voice counted in this process. The displayed matrix created accountability, and the team’s agreed-upon definition of consensus (“I can live with it”) required an opinion. Never had the team operated in such a democratic way.

As the master set of writing samples grew and progress became visible, the team’s excitement bloomed as they saw a very useful tool coming to life. Inevitably they got stuck on some students; that was when the utility of the tool became a way out: “So is Writer D in the same instructional group as Writer R? Why would that be so?” Or, “Are we saying this writer is exhibiting exactly the same behaviors as Writer K? Can we, in good conscience, say these two writers are of equal quality and ‘pile’ them?”

The “piling” happened more often than anticipated, and teachers wondered whether that was a function of the age and developmental level of the writers. The final continuum articulated 11 different writing levels. In later iterations with older writers in higher grades, piling occurred less often; that is, the spread of the writers widened.

Pride in each other and their work was the most significant outcome for this team. “There’s no way we are walking away from it,” said one teacher. Said another, “Something new might come in, but we will make that work with what we know here.”

Changing teaching

The team celebrated as the last writing sample was inserted into the continuum; they had built shared understandings about what constituted quality writing. “We now had dedicated time for writing. We certainly taught it in the past, but now we prioritized it,” said a teacher. Perhaps more important to their teaching practice, the team articulated their expectations for how students would move toward that quality. They also could use those shared expectations across all of their classrooms.

As teachers pondered how to make their work explicit and visible to students, they pictured each writing sample in a plastic page protector with descriptors of the annotations in student-friendly language. They said they could even picture students near the samples, self-assessing. They weren’t sure yet what to call it, a writing continuum or a visual rubric, so they vacillated between the two.

As their launch date neared, they looked back at their continuum one last time. The team decided to consolidate a few writing samples and ended up with seven distinct samples. They wanted students to have some tangible connection to the continuum, and the number seven translated well to the seven colors of the rainbow. But even that image was revised through their discussion and deepening understanding of the work. The first rendering placed the neediest writers in the red area and the most skilled writers in the violet section. The team discussed the connotations of red in education; even the tracking form used for the data teaming cycle automatically colored lower-achieving students in red. The team chose to flip the colors and make red represent the most skilled writing.

The kindergarten team began using the rainbow during the 2014-15 school year, using the previous year’s students’ work as their original anchor papers. Students quickly took to the rainbow as teachers used it more and more. “Students were excited to write,” said one teacher. “In previous years, it had been a chore.” Instead of coaching a young writer to write more, teachers and students used the rainbow to identify the next steps in their writing.

From using the rainbow to analyze a shared piece of writing as a whole class to using a rainbow sticker or stamper in individual student conferences, the rainbow pervaded kindergarten rooms. All team members agreed to post the writing samples on boards in a location where students could use them. The board was a highlight of many classrooms, as evidenced by these comments from teachers:

  • “Students felt big and smart.”
  • “They would collaborate with each other at the board, pointing at the rainbow and identifying for each other what writing behaviors were needed to advance to the next color.”
  • “Some of mine would bump others out of the way! It was fun for them to self-assess.”

Some team members used the writing rainbow to introduce student-led conferences. Another videotaped each student talking about his or her writing with the rainbow and sent the video to families. One of the videos articulates the kind of thinking and progress these students made through Common Core-informed instruction: http://bit.ly/1HflQms.


On to data

The kindergarten teachers could visualize the rubric in action with students, so their data team process was ready to move forward. After each teacher assessed her own class, she brought her data to the team to set goals. The goal centered on moving writers up one or more levels. Teachers knew then that they needed a more advanced sample, perhaps from a 1st-grade writer, to provide inspiration and a model for the most skilled writers in their classes. The writing continuum was invaluable for the next steps in the cycle: The ranking provided grouping strategies, while the annotations clearly delineated each group’s writing strengths and challenges. The data team spent time discussing various strategies designed to move a particular writer from one group to the next. “We discovered we didn’t have many strategies for our lower-achieving students. We needed to find some,” said a teacher. As new writing samples emerged weeks later, the process was repeated, and student groups were redesigned on the basis of the new data. For example, a student may have been in a group working on sentence beginnings and now was moved to a group working on interesting language. Different teachers would teach different groups.

Their next decision may only have worked because of the relationships teachers had built with each other: The team chose to group and regroup students for an extension time each day and teach another segment of writing. Armed with their collective data, students moved rooms to be with other writers who needed a similar burst of instruction.

Each decision in this process acted like a domino in their relationships, pushing them even further. “We had to really trust our colleagues,” said one teacher. “We were accountable to each other, really for the first time.” The very decision of sharing students for one small segment of the day created an urgency for the entire grade level, changing pronouns from “my” students to “our” students.

Their commitment to working this way paid off. At the end of the 2014-15 school year, 94% of High Shoals students met state standards, and 59% were in the top three of the seven categories on the writing rainbow.

Unexpected outcomes

The process developed by the kindergarten teachers led to a surprising outcome: Other grade levels voluntarily asked to engage in the same kind of work.

As often happens in schools, good work gets talked about — in the parking lot, on the playground, and over social media. Other grade levels expressed interest in their work. “For the first time in my career, I really felt like kindergarten mattered, and we were valued by the upper grades,” declared one teacher. By the end of the year, grades 1 and 2 requested protected time to develop their own continua.

“If you set goals for kids, they will accomplish them, even if they are 5 or 6 years old,” said one of the kindergarten teachers. Attitudes about goal setting also began to shift as upper-grades teachers saw the effect of goal setting on student engagement.

The work also was recognized outside the building. The High Shoals kindergarten team was publicly identified as a model during a district data team training session. Team members also gave two external presentations: to teacher candidates in the birth-to-5 teacher preparation program at the University of Georgia and at the AdvancED Conference in Atlanta. The team celebrated with a student teacher who was hired onto the team the next year. She used her work with the rainbow on the rigorous EdTPA assessment and received full points.

Fads and trends permeate the profession. Looking at student work is not immune from that often-unfortunate reality. Although looking at student work was and is critical for Common Core implementation, simply looking at the work was not enough for this kindergarten team to achieve the teaching and learning levels it sought. What made the difference was combining examination of student work with a meaningful protocol that balanced belief and action. Only then could teachers be just risky enough with each other to make the collaboration matter for themselves and their students.

REFERENCES  

Arter, J.A., & Chappuis, J. (2013). Classroom assessment for student learning: Doing it right — using it well. New York, NY: Pearson.

Hattie, J. (2008). Visible learning: A synthesis of over 800 meta-analyses relating to achievement. New York, NY: Routledge.

THOMAS M. VAN SOELEN (tmvansoelen@gmail.com; @tvansoelen) is president of Van Soelen & Associates, a professional development and consulting company, Lawrenceville, Ga.

© 2016 Phi Delta Kappa International. All rights reserved.