
DATA COLLECTION

T-TEST

In order to determine the success of the study, each student read two pages from guided readers aloud to me every week, took part in a weekly curriculum test, and completed a pretest and posttest whose scores were compared with a quantitative t-test. To identify the students’ reading levels at the start of the study, the students were given a pretest that measured accuracy and self-corrections and included a comprehension conversation. As the students read, I measured the percentage of words they read correctly and the number of self-corrections they made. The students received a mark if they incorrectly read, omitted, or substituted a word or needed to be told a word. Accuracy was expressed as a percentage and was calculated with the following equation: (total words read - total errors) / total words read x 100. Self-corrections represented the number of errors students corrected on their own while reading. These self-corrections can act as a sign of deeper comprehension because they show that students recognized what they read did not make sense in the passage. The passages were always a cold read, which means it was the first time the student had read the text. Each comprehension conversation involved an individual score for three comprehension question types: within the text, beyond the text, and about the text. Within the text questions could be answered by referring directly to the text. Beyond the text questions asked the reader to make comparisons and connections to what they read. Finally, about the text questions asked the students to consider the author’s purpose and why particular decisions were made in the text. Each of these question types was scored on a scale from zero to three, where a zero represented no understanding and a three represented excellent understanding. An extra point was awarded to students who showed a deeper understanding of the text. The Fountas and Pinnell Benchmark Assessment System provided both a nonfiction and a fiction text, and the passages and questions came from that system. Students were given the opposite genre of passage between the pretest and posttest to ensure that they were being assessed on their overall comprehension ability. In order to determine a student’s reading level, the student had to meet the requirements for both accuracy and comprehension.
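To make the accuracy calculation above concrete, the short sketch below computes the percentage from a running record tally. This is a minimal illustration in Python based only on the equation stated above; the function name and the example numbers are my own and are not part of the Fountas and Pinnell materials.

    def accuracy_percent(total_words_read, total_errors):
        # Accuracy = (total words read - total errors) / total words read x 100.
        # An error is any word read incorrectly, omitted, substituted, or told.
        return (total_words_read - total_errors) / total_words_read * 100

    # Example: a 120-word passage with 6 marked errors reads at 95% accuracy.
    print(round(accuracy_percent(120, 6), 1))  # 95.0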

TARGETED QUESTIONING

Each weekly passage corresponded with the students’ individual reading levels. After the students cold read a passage, they were asked a series of six comprehension questions based on what they had read. Each set of questions consisted of three different formats: within the text, beyond the text, and about the text. These three question types invited students to think more deeply about a text. Each of the six questions had a preplanned answer used to score the students’ responses, and the questions were asked in the order of within, about, and beyond the text. Since there were two questions of each type, I scored the students against the responses I had preplanned. The closer a student’s response was to the preplanned answer, the higher the score for that question type. Since each question type was scored out of three points, the overall comprehension score for my weekly collection was out of nine points. The students repeated this process every week for a total of six weeks. Although the pretest and posttest were scored out of ten points, I did not award the students the extra point for extended understanding during the weekly data collection.
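The weekly score out of nine can be illustrated the same way. The sketch below assumes each of the three question types has already been scored from zero to three against the preplanned answers, so the weekly total is simply the sum; the dictionary keys are my own labels for the three question types.

    def weekly_comprehension_score(scores_by_type):
        # Each question type (within, beyond, about the text) is scored 0-3,
        # so the weekly total is out of 9 points.
        for question_type, score in scores_by_type.items():
            if not 0 <= score <= 3:
                raise ValueError(f"{question_type} score must be between 0 and 3")
        return sum(scores_by_type.values())

    # Example week: strong on within the text, weaker on about the text.
    print(weekly_comprehension_score({"within": 3, "beyond": 2, "about": 1}))  # 6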

STORYTOWN TESTS

Finally, students were scored weekly on Storytown tests. All Storytown tests centered on vocabulary and comprehension of a weekly story that was read to the students. Each test was composed of a variety of comprehension question types and one written comprehension question. The tests also included vocabulary, which I excluded from the data. The most important part of my study was to see how students would answer the preplanned questions. These test scores allowed me to see whether there was a relationship between the skills from guided reading and comprehension of the weekly reading passages. For each test, students were allowed to use their books for the comprehension portion, so they could reread the passage and use it to answer questions. Since students were able to use their books, they often did well on within the text questions. The tests often included chronological questions about what happens after another event in the story.


Since my district requires me to administer the weekly assessment, I included the data because it related to the students’ comprehension. Unfortunately, the curriculum, with a copyright date of 2008, was quite outdated in comparison to the standards I am required to teach. Storytown’s questions were frequently difficult for the students to understand. I chose to look at only the weekly comprehension score so that I could compare it with my guided reading data and see whether there was a connection and a transfer of skills regarding reading comprehension.


How was assessment information utilized to inform instructional decisions?

The weekly collection of data was chosen in order to ensure that students were reading passages at an appropriate level to meet their needs. Although the students were tested at their levels at the beginning, I wanted to meet the needs of each student in the classroom and provide each student with a text at his or her instructional level. In turn, I needed to ensure that each student’s accuracy and comprehension were suited to his or her group’s level. Without accuracy and self-corrections, it would have been difficult to gauge student levels on comprehension alone. If a student scored at a reading level that was too difficult or too easy, I determined the best placement for that particular student. The Fountas & Pinnell Benchmark Assessment System and the Storytown tests were both assessment processes the students had already adapted to. The new assessment the students were exposed to consisted of reading aloud to me while I tracked their responses on a sheet of paper. Finally, I made sure to benchmark the students in a quiet room to best meet the needs of my student population. The students in my classroom work best when they are working one-on-one with the teacher in a focused atmosphere.


As the weeks progressed, I used the information I gained from the study to inform my instructional decisions. If I noticed that several students, or a majority of them, were struggling with a particular skill, I would reteach that skill to the whole class and ask targeted questions focused on it. For example, I retaught cause and effect to the whole class when the students could not identify the meaning of or connection between a cause and its effect during targeted questioning. Then, I reassessed the students during my weekly collection to analyze their progress. I also used the results of their comprehension conversations to focus on strategies for answering a specific question type. I recognized that the questions I asked during Storytown instruction were mainly within the text questions. In order to give students more exposure, I asked more about the text and beyond the text questions when I read the story aloud to the class. This ensured that even if students did not encounter a particular question type during guided reading, they would still practice it during whole group instruction.


DATA ANALYSIS

During the pretest and posttest, students were scored on the three question types: within the text, beyond the text, and about the text. Students could earn a maximum of three points for each question type. At the beginning of the study, students struggled the most with about the text questions and were most successful with within the text questions. As the study continued, the relative order of the question types did not change, and within the text questions remained the students’ most successful question type. However, the gap between within the text questions and about the text questions began to close. Student success with all question types increased, and the average student scored two or above. The largest increase in score came in about the text questions. I believe this large increase was due to the exposure two of my groups had to about the text questions. The two highest leveled reading groups, which consisted of 10 of my 19 students, spent most of their time practicing about the text questions and discussing ways to answer questions requiring higher level thinking. The students also had the most room to grow in this category. Within the text questions were already scored at a high level at the beginning of the study, which did not leave as much room for growth. I also acknowledged that students had more exposure to within the text questions prior to the study and had not frequently been asked higher level reading questions. With that knowledge, I wanted to help students understand the ways they could think about the differing comprehension questions and the strategies for approaching them.

The comparison of the Fountas & Pinnell pretest and posttest shows how students changed throughout the course of the study. My students began the study with a high level of accuracy. Decoding words had never been difficult for my students, but comprehension had always been an area for improvement. During the study, the comprehension scores at the same level of text went up almost twenty percent on average. Accuracy increased, but only by one percent. Because accuracy was high from the beginning, the students gained six extra weeks of exposure to texts at their instructional level for comprehension. With that practice, students were able to read at an accuracy rate one percent higher. The comprehension scores rose significantly because students had practiced answering questions at the different levels and knew how to formulate a well-thought-out answer. When the students developed their ideas, they knew to reread the passage and use their skills to analyze the question.


Each group was composed of no more than five students during the course of the study. The students were grouped according to their reading levels. Students A and B were in the group that did not reach reading expectations according to the Fountas and Pinnell Instructional Level Expectations. Student C was in a group that was approaching grade level. Students D and E were in the same group, which was meeting grade level expectations. Students F and G were in a group together that exceeded reading level expectations. Each week, students were given passages at their instructional reading level. Unfortunately, no week was a full week of instruction due to snow days and holidays; all weeks of the research were four days or shorter. I believe that the growth would have been more consistent if the students had had a full week of instruction. Although the students were not consistent with their growth every week, they remained fairly steady throughout the study. According to the results above, group three made the most improvement throughout the study. With targeted questioning, the students received more individualized attention during instruction, which allowed them to feel comfortable discussing their answers and elaborating on them. Previously, the students were concerned with how their peers answered. Before conducting action research, the students read as a group and answered questions as a group. If the students felt they were wrong, they were reluctant to share and would pass the answer on to someone else. Allowing the students to read with me one-on-one gave them more confidence to elaborate on answers, which resulted in better comprehension. They did not have other students to impress, and I could push them to think outside of their normal comfort zone. I also assured the students that I wanted to see them succeed by praising them for right answers and for the hard work they put forth. Since the feedback could be directed toward one student at a time, I was able to customize it to help each student succeed.


The only student whose results frequently wavered was Student D. This student’s results likely varied because he was frequently truant. Of the 21 research days, he was at school for only 16 of them. As a result, Student D frequently missed lessons that could have supported his comprehension skills, such as those in week four. Other students struggled with some genres more than others. Each week covered a different type of story: fairytale, folktale, mystery, realistic fiction, diary, and fantasy. However, all genres were fiction. One reason that Student A did not remain at her highest level was that she was posttested with a nonfiction text. Although the student practiced targeted questions for comprehension, she did not get as much exposure to nonfiction texts and their features. This led me to believe that the questions were not as well understood by this student in that genre. She also did not tend to read nonfiction texts for enjoyment; most books she read were fiction because she liked them better. If the student were tested again with a fiction text at the same level, I believe her comprehension score would have been higher. Similar to Student A, Student C often reads mysteries for enjoyment. Since week three focused on mysteries, I believe she was more engaged with the text, which led to a higher comprehension score. She was excited about the text, and she made more connections between the text and other stories she had read.

Storytown Comprehension Tests

The graph above shows the average scores for the Storytown comprehension tests. Each student took the assessment weekly, typically on Friday. The average test score remained at 78%, in line with quarter two. Due to the number of snow days, holidays, and in-service days, there was no full week of learning for the students. Usually, the students got two days of exposure to both the story and the vocabulary. When the students had multiple exposures to the story, they were able to think about it more deeply and comprehend it better. I believe the decrease in scores did not directly correlate with an inability to comprehend the text; rather, the students needed direct instruction with the questions and the story. If the students had had a full week with the story, I believe they would have had a higher success rate.


According to the graphs above, Students C through G all advanced their scores from instructional to independent. The students started at an independent level that they could read and comprehend easily. The instructional level was challenging but attainable, giving the students room to grow. Each student moved to a higher ability within the level they were on, and all but two students went up one reading level. By the end of the study, five of the seven students’ original instructional levels had become independent. Books at the reading level provided at the beginning of the study were now easy for the students, and they were ready to move to the next level. Student A began at a level that was hard for her; the text was too difficult for her to confidently comprehend what she read. As a result, her score implied that she needed to move down a level. In comparison to Student A, Student B struggled not only with comprehension but with decoding as well. At the beginning of the study, she started at a level that was hard for her because of her accuracy. As time progressed, her accuracy and comprehension improved to the point where the text she started with became her instructional level. She also made more self-corrections as she read, which shows a higher level of accuracy and self-monitoring. The rest of the students maintained high accuracy and increased their comprehension scores. Although their self-corrections went down or remained flat, their comprehension and accuracy scores showed that the instructional-level text had become too easy for them. This led me to believe that the study increased their overall comprehension.


Triangulation

Throughout the study, it is clear that my t-test results and my weekly targeted questioning confirmed and enriched one another. As the weeks progressed, my students’ average comprehension conversation scores increased, which showed a higher level of comprehension within my study. Although the students did not always improve from week to week, the pretest and posttest comparison showed that they made improvements overall, with posttest scores higher than pretest scores. I believe the data could vary with the students’ mindset about reading and with general life events and attitudes; as a result, some weeks produced higher overall scores than others. However, the Storytown comprehension tests refuted my hypothesis. I wonder whether this was due to the limited number of instructional days the curricular schedule provided before testing the students. I also believe that the genre of a text could affect the students’ interest in and understanding of the overall content. In all, I believe that comprehension scores would have been higher in all three testing types if the students had been given a full five-day week of instruction during all six weeks of data collection.
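As a sketch of how the pretest and posttest comparison could be run, the example below performs a paired t-test on each student’s two comprehension scores. The scipy call is a standard one, but the scores listed are placeholder values for illustration only, not the actual data from my study.

    from scipy import stats

    # Paired t-test: each student's posttest score is compared with his or her
    # own pretest score. The lists below are placeholder values, not study data.
    pretest = [4, 5, 6, 5, 7, 6, 8]
    posttest = [6, 7, 7, 6, 8, 8, 9]

    t_stat, p_value = stats.ttest_rel(posttest, pretest)
    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")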
