Grading, mechanics and workflow
One of my main goals in focusing so much on the different elements that make up a podcast was to help students find a way into the material, as well as to see organizational possibilities. Another was that I wanted students to develop the criteria by which they would be evaluated.
During the penultimate week of classes, as a way to reinforce what they were doing while creating their podcast, I started asking them about all the different elements and what we should be looking at to determine the quality of a podcast. As they threw out different elements, I typed and organized their responses within a Word document displayed for the class to see. The more I typed, the clearer it became that there were distinct areas within the outline. Students started debating over which areas some elements belonged to. When they were satisfied they had all of the criteria in all of the right places, we read through it one last time. They asked me what I thought and I said, “It looks good.” That’s when I announced that I was going to convert the document to a rubric and use it to grade their final project. I also announced that my assessment would only be one among the entire class’s assessments. That is, they were going to evaluate each other’s projects based on the criteria they themselves developed, and those evaluations would determine their projects’ grades.
To help make my life “easier” through automation, I used a Google Form to create a rubric with all of the criteria, and added a ranking scale along with descriptions of how well the elements worked in their projects. Here is a pdf of the design view to give you an idea of what my students saw. To see how it actually works, you can go here and try filling it out. The color-coded tables containing the descriptions for the ratings are actually images. I first created this form in Excel and used TechSmith’s Snagit application to take snapshots of each row (though I could have just as easily used Windows’ built-in Snipping Tool). I then uploaded the appropriate image for each “question” in Google Forms’ design mode.
I air quoted “easier” at the start of the previous paragraph because, while the process was easy (for me), it takes many steps to compile everything into a report I could send back to each group, containing the (anonymized) individual assessments along with an overall score for the project’s grade. In Google Forms, you can download a form’s responses as a CSV file, which makes it easy to see all the raw data. From there, I copied the data to separate worksheets for each group, randomizing the sort order and removing any authorial information. This included Bonne’s and my responses as well, though I did place my responses in the first row of each report so that students knew what I had to say. I then created a separate file for each group’s work, formatted everything, and uploaded the feedback within Canvas (our learning management system).

Here’s an example report of one group’s assessments. You may notice that the Endings section is blank for everyone but me; for some reason I left that part out of the original form (I’ve since corrected it). You may also notice that students tend to judge their fellow students much more harshly. While my own grade for the project is averaged in along with the rest, I made it clear to the class that I reserve the right to override a project’s final grade in case students go astray. In the past, some groups tried something creative and unorthodox that still worked well; that is, students sometimes only see how a project didn’t fit exactly into how we had been discussing the different elements in class. But for all that, in all the years I’ve been doing this, I’ve only had to override one project’s final evaluation grade.
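For anyone who wants to automate the splitting, shuffling, and averaging described above, that step could be scripted instead of done by hand in a spreadsheet. Here is a minimal Python sketch, assuming a CSV export from Google Forms; the column names and rubric categories below are hypothetical stand-ins, not the actual headers from my form.

```python
import csv
import random
from collections import defaultdict

# Hypothetical headers -- a real form export would name these differently.
GROUP_COL = "Which group are you evaluating?"
NAME_COL = "Your name"
SCORE_COLS = ["Content", "Delivery", "Audio quality", "Structure"]

def build_reports(csv_path):
    """Split form responses by the group being evaluated, shuffle each
    group's rows so authorship can't be inferred from order, drop the
    respondent's name, and compute per-criterion averages."""
    by_group = defaultdict(list)
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            by_group[row[GROUP_COL]].append(row)

    reports = {}
    for group, rows in by_group.items():
        random.shuffle(rows)  # randomize order to anonymize
        # Keep only the score columns, discarding the name column.
        anonymized = [{col: r[col] for col in SCORE_COLS} for r in rows]
        averages = {
            col: sum(float(r[col]) for r in rows) / len(rows)
            for col in SCORE_COLS
        }
        reports[group] = {"responses": anonymized, "averages": averages}
    return reports
```

Each group's report then contains the anonymized individual ratings plus the averages that feed the overall project score.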