Category: Education

Podcast Project in the Class Part 1: Podcasts as an English 102 Research Paper

I am still reworking and adding materials to this series of posts

Part 1: Introduction

This is an expanded discussion of a presentation I made for my English department faculty meeting at the end of 2019.

Currently, the roadmap takes this path:

  1. Introduction
  2. Working with Groups
  3. Contextualization
  4. Elements of Podcasts and Stories
  5. Interviewing
  6. The Practice Podcast
  7. The Suicide Podcast
  8. Workflow and Mechanics of Evaluations
  9. Final Thoughts
  10. Resources

It has been a while since I’ve written about my classroom experiments. A few years ago, my English 102 composition students created researched video essays. We had some really interesting results. Maybe I will eventually write about them. But today, I am going to walk through my English 102 class from last winter (2020), where instead of the traditional research essays, we created podcasts.

As my note above explains, this post is an attempt to fill in the gaps of a mere slide deck. I say “this post” but really mean “these posts”: after reading through the draft, I realized that I’d gone into much more detail and explanation than I would have for a slide presentation. So, as with an earlier set of posts, I’ve broken this discussion into multiple sub-posts for readability’s sake as well as to satisfy my own tendency to beat dead horses:

The Backstory

The idea of using podcasts as research papers was born out of an NEH grant my dean at the time had written to create contextualized courses pairing both English and Math with other disciplines (any approach was acceptable). I had recently started working with Bonne Smith, one of our Journalism instructors, on developing a contextualized English 102 research writing course tied to the newspaper course. At first, we wanted to take a community learning approach (teaching the newspaper and composition classes together). However, because the newspaper course requires students to interview and be accepted onto the editing staff, it became apparent that those logistics would not work for every student. So instead, we decided to treat the composition class as a feeder course for the newspaper class, using journalism as the framework to generate student interest in the newspaper. After much brainstorming about projects that could teach journalism skills while still allowing me to cover the principles of composition, research, documentation, and organization of information, we decided that the vehicle for this endeavor would be podcast creation. But as a feeder class, we needed the newspaper to be involved.

So Bonne and I decided that the topics for the podcast would be chosen by the editors of the college’s newspaper each quarter (pending their approval, of course). These topics would revolve around campus and larger community issues (we’re a small community college and saw this as an opportunity to encourage our students to learn more about their college and town). The topics would be based on sets of stories and articles that the newspaper had published during previous quarters. This not only provided the field in which my students played, but also acted as a source where students could begin their initial research, to get a lay of the land, so to speak. It also showed how students just like them could produce and publish materials that would be seen by a much larger audience than their classmates.

It just so happened that during the previous quarter, the newspaper had published a series of articles regarding suicide. These articles provided an overview of other research and statistics that we used to prompt discussion with our students. We then asked the students to look for topics in the articles about which they wanted to know more; that is, to find topics they were interested in and could dig into more deeply. At the end of the quarter, the students would be able to submit their podcast to the newspaper, and if selected, it would be published on the paper’s website. This way, we could better connect the class to the college’s newspaper.

It still surprises me that the front part of the course proved so much more difficult and demanding than any of the technological aspects (audio, video, multimedia).

Because of that lesson, I’m going to spend much more time over the next few posts discussing Groups, Interviews, and Medium Structure [BLAH BLAH BLAH]. While I do not plan on getting into the trenches of specific technology such as Audacity or even the Rodecaster Pro (compliments of our Library’s still-developing Audio/Visual lab), I will note what we used, or at least some of my suggestions to students on what they could use. I am currently developing instructional materials for that new A/V lab (which is in desperate need of a name!). The quick explanation for why I won’t discuss those specifics is that I did NOT teach any particular recording device or audio editing package. I merely suggested apps and general approaches, offering my assistance as needed. Only once or twice over the years of doing projects like this has any group actually asked me for specific help. And in those cases, I googled alongside them and learned as they learned. Of course, having lots of experience with all sorts of programs did help. But still, the lesson ought to be clear: trust your students’ abilities.

So why a podcast?

Like an essay, a podcast typically introduces the topic that is the subject of the episode, then supports it (using primary as well as secondary sources), then arrives at some sort of conclusion. In the movies or in books, this is commonly referred to as a beginning, middle, and end, though a podcast’s beginning and ending may be much shorter. And like an essay, some research will be required prior to interviewing anyone. That is, in order to know what kinds of questions should be asked of the interviewees (sources), the interviewers (author) will need to know more about the topic of their podcast (research paper).  But all this will stem from developing a research or focus question (The Pitch).

This is not to say that all podcasts are research projects. They can obviously be creative works such as stories, poems, plays, music, etc., none of which require research. I can easily imagine an English 101 professor who is teaching the narrative essay using a podcast; it could make for an interesting learning moment to reinforce the elements of a story, such as translating written transitions into audible ones. What all these modalities of essays, or any creative work, require is the ability to tell a story. Depending on the goal of the class or project, there will be different ways to tell that story. The same is true of podcasts.

However, what I typically focus on in my English 101 composition course is the argumentative essay and its elements and structure; it allows me to touch on all the other types (narrative, compare/contrast, etc.) because a well-written argument, in my experience, often includes many elements/strategies from those other modes. I personally am of the ilk who believe that all communication is a form of argument. In English 102, I continue to focus on structure and elements but switch to a medium with which students have much less (even zero) experience. I do this in order to help slow students down during the creation (writing) process to be more deliberative and reflective so that they can make more informed choices about that structure and why they use whichever elements they choose. Part of that process is choosing what not to include, which is just as important as what to include. Organizing that content is dependent upon what story is being told as well as its goal and audience—and the medium. In other words, the medium of the story isn’t a special case; as with podcasts, those dependencies are true of other mediums like videos, multimedia projects, blogs and all the rest.

The goals behind this contextualized English 102 composition class were for students to learn:

  • more about the structure of writing by using a less familiar medium (podcasts) and so, more about the possibilities and power of narratives
  • research methods, documentation and integration of sources
  • collaborative learning (group work)
  • more about journalism: interviewing and reporting, integrity and ethics
  • empathy; how to create relationships within the community and the value of primary resources; that is, not merely using people as resources for their own gain
  • more about the college’s newspaper course, to increase its enrollment


Sample Project, Notes from the Collaborative Classroom pt 4

Sample Project: Recipes

While the earlier posts on this topic discussed the mechanics and workflow of group work, one of the main reasons I wanted to write about this experience was to highlight some of the outstanding projects produced by my students.

And so finally…!

The Recipe Project is one of my favorites because it requires really understanding the medium and genre of the documents involved. That understanding comes in the form of two different requirements of this particular project: 1) breaking down all the elements and concepts of recipe documents, and 2) choosing a different genre and blending them together. Also, groups needed to produce two versions of this document, one focused on an older audience and the other on a younger one. As with all of the projects in the course, my goal is to get my students to see and work beyond conventional concepts of what constitutes a particular type of document (in this case, a recipe).

Here is a copy of the Recipe Project Prompt.

Before they began the actual project planning files, they first had to decide as a group who their audience was, which recipe they were going to use, and which type of document they were going to use as the source for the blended document. They prepared a relaxed “presentation” for the class so that we could ask questions and make sure they were all on the same page. In all, there were two areas with which they had difficulty. The first was with the idea of blending documents itself. For example, the Penguins group decided they wanted to use a recipe for Lengua (beef tongue) since it was the favorite of a group member. However, during that initial class discussion, it was clear that they hadn’t decided on a source type of document; rather, they focused on their audience, but in a general way: for the adults, they would write it more “seriously,” and for the younger people, it would be more “fun.” But they were confused when asked what type of format (document) the recipes would appear in. To be fair, this was typical during that first discussion. To help, I would ask them about their audience and why they might want to show these people this recipe. That is, I was trying to get them to think about context.

One very clever group, the Hedgehogs, decided that they wanted to use a cake ball recipe in the form of the wedding invitation genre.  While maintaining the parameters of the writing prompt, they went one step further for the younger audience version by also converting the invitation/recipe into a puzzle.

An interesting thing began to occur later in the semester: groups began to consider their planning forms part of their finished documents, tailoring them to echo the content and style of their finished projects and creating a sort of brand. Here are the Recipe Project Planning Forms they used.

And here are the electronic versions of both the adult and children invitations that they uploaded to Moodle:

Wedding Cake Balls – Adult Invitation

Wedding Cake Balls – Children Invitation

If you read through them, you will notice the group’s extreme attention to the details of their different genres. For example, the numbers were spelled out in the adult version because that is part of the invitation’s formality. Although it was a simple thing, they switched to numeric representation for the children’s version because the numbers would be easier to read; yet the tone, while fun, still helped maintain a sense of formality (as did the font usage, which also changed between the adult and children versions).

While the PDFs make the text easier to read, they do not do the documents justice; please browse through the gallery and enjoy the group’s creative use of form and details:


The Recipe Project Report shows how their fellow classmates also responded to such details.

Mechanics and Workflow of Grading Group Work, Notes from the Collaborative Classroom pt 3

Mechanics and Workflow of Grading Group Work

With any group work, and in particular with how I’ve set up the assignments and the amount, there are a lot of moving parts for any one project.

Grading and Infrastructure

There are two things to consider when setting up a system: the classroom management system (assignment uploads, etc.) and individual grading apparatus. To see some of the types of topics that would be graded, here’s a bird’s-eye view of the general grade breakdown from the course syllabus:

Grade Weight Overview

These days, many schools will have something like Moodle or Blackboard in place for communicating with students, attendance tracking, turning in assignments, grade books, quizzes, polls, etc. My university uses Moodle (I believe at the time, it was version 2.0) to help teachers manage their classrooms. Here is a snapshot of what a typical class of mine looks like on Moodle (apologies for needing to zoom in quite a bit for details):

Sample Class Moodle Page

My approach has been to begin the semester by covering many of the document elements and principles up front, and then applying those concepts to actual projects. I do this to avoid “teaching by mode” because, like freshman composition essays, documents are rarely created using one single type of element or rhetorical tool; that is, successful documents and essays usually make use of many tools and elements. However, the project topics themselves tend to create a more nuanced focus on the material previously discussed. This also helps prevent the pattern of “learning” one topic, then forgetting it to move on to the next concept.

As you can see from this snapshot, there are a number of assignments/tasks co-occurring at any one time:

Multiple Assignments Circus

There is a lot to do in this class, and I find that by juggling this many tasks, we cover more assignments and projects than we otherwise might. It’s also necessary if you want to evaluate all parts of any particular project’s process. And of course you do! The volume of tasks also seems to balance out (my perception of) the relative ease of the material itself, avoiding the “Easy ‘A’ Class” label. Although this organization seems a little…unorganized…, once you dive into it, it begins to make sense. Really, it’s just sheer textual volume masquerading as a lack of organization (students and I both have to pay attention to the schedule!). Admittedly, I do keep an eye open for other tools I could integrate with Moodle that might help alleviate that sense of overload.

Electronic versions of projects/assignments and the ability to store them without physical copies have been among the most useful tools for managing my classes. However, even though most projects and assignments do have a digital birth, I also have students turn in hard copies. A physical manifestation of their work gives them yet another perspective on it, which is really most assignments’ larger goal. This works for non-digitally created work as well; that is, if the project was born non-digitally, students turn in digital photos or scans, again giving them another opportunity to see their work from a different perspective.

I was using electronic gradebooks in the form of spreadsheets even before my university set up that feature for our Moodle implementation. And though Moodle’s grading system is mostly clunky (and limited), I really do like it, especially for students’ ability to track their grades on their own and for the grade notifications (online and email). While most professors typically use a total-point system of grading, I prefer the weighted percentage method, though the point system really is the same thing if you know ahead of time how many quizzes and other assignments you are going to use for the entire semester. I never know how many quizzes I’m giving because I use them primarily as a check on whether students are doing the reading. Also, the number of projects sometimes changes depending on the speed of the class. For details on how to set up weighted grading in Moodle, open this PDF copy of instructions I created for a particular professor’s American literature class: Sample Weighted Grades Setup.
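The advantage of the weighted method is easy to see in a few lines of arithmetic: each category’s average is scaled by its syllabus weight, so adding or dropping a quiz mid-semester never changes the category’s overall share of the grade. A minimal sketch (the category names, weights, and scores below are made up for illustration, not the actual syllabus breakdown):

```python
# Weighted-percentage grading: average each category, then scale by its
# syllabus weight. The number of quizzes can change at any time without
# altering the 20% share quizzes hold in the final grade.

def category_average(scores):
    """Average a category's scores (as percentages, 0-100)."""
    return sum(scores) / len(scores) if scores else 0.0

def weighted_grade(categories):
    """categories maps name -> (weight, scores); weights sum to 1.0."""
    return sum(weight * category_average(scores)
               for weight, scores in categories.values())

grade = weighted_grade({
    "quizzes":       (0.20, [80, 90, 100]),  # three quizzes so far
    "projects":      (0.60, [85, 95]),       # two projects
    "participation": (0.20, [100]),
})
print(round(grade, 1))  # 0.20*90 + 0.60*90 + 0.20*100 = 92.0
```

A total-point system produces the same number only if every assignment’s point value is fixed up front; the weighted version defers that commitment.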

One thing I really like about using Moodle or Blackboard for turning in assignments is the ability to set due dates AND times. I used to set them for 5 minutes after the start of class on the day a project was due, but later changed it to 5 minutes before: too many groups were busy uploading projects at the last minute from the computer labs and arriving late to class. After the typical rodeo of turning in their first assignments, I’m able to address individual problems so that they become used to the upload process. After that initial run, I maintain a strict no-late-work-accepted policy. That electronic timekeeper removes the burden placed on me by students who are constantly late, as well as maintaining a sense of fairness to the groups who do get their work in on time.

Crowdsourced Grading

The first time I taught this course, my own comments and grades were the only written feedback about their projects that the students received. I originally thought that groups taking notes on their peers’ comments during the workshop would help them be more engaged. But I found this was a missed opportunity for another chance to engage with their material and design choices. Also, only a few students consistently volunteered feedback during these workshops.

For my 2013 classes, I decided to try a crowdsourced grading approach advocated by Cathy Davidson in her post, “How to Crowdsource Grading,” that would hopefully encourage more students to participate in giving feedback and, again, to learn from their own observations. For my own brand of crowdsourced grading, I based the project grades on the average of their peers’ evaluations, folding my own rankings into the mix. While I typically find rubrics a very limiting way to grade, it quickly became obvious that I needed some sort of descriptive ranking form to get everyone on the same page about what and how to evaluate their peers’ projects. It also had the benefit of constantly reminding students of the concepts and questions we discussed throughout the semester. I created a 0–5 point scale rubric based on Steven Gerson’s evaluation form found within his Writing That Works: A Teacher’s Guide to Technical Writing. Although my own evaluation scores were averaged along with their peers’, I reserved the right to adjust a project’s final grade if the class had for some reason missed the complexities of a particular project (which only happened a few times). For the Spring semester, I had thought to use the evaluation forms themselves as a pedagogical exercise and have the students develop their own form using the topics/ratings from my borrowed form. In practice, it was a logistical nightmare.
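The averaging itself is simple arithmetic. Here is a minimal sketch of the approach described above, assuming a 0–5 rubric scale and treating my own score as one more evaluation folded into the pool (the actual spreadsheet formulas may have differed):

```python
# Crowdsourced project grade: average the peer evaluations on the 0-5
# rubric scale, fold the instructor's score in as one more voice, and
# convert the result to a percentage.

def project_score(peer_scores, instructor_score, scale_max=5):
    """Return a 0-100 percentage from rubric scores on a 0-scale_max scale."""
    all_scores = peer_scores + [instructor_score]
    average = sum(all_scores) / len(all_scores)
    return average / scale_max * 100

pct = project_score([4, 5, 3, 4, 5], instructor_score=4)
print(round(pct, 1))  # (25 / 6) / 5 * 100 = 83.3
```

In practice the instructor could carry more weight (a weighted average) or, as noted above, simply override the final grade when the class misses a project’s complexities.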

Grading and Technology


Grade Weight Overview

Notice that there are two sets of evaluations for each project: one for individuals grading group projects, and the other, for grading their group co-members’ contributions to those projects.

The generic workflow for project evaluations went like this:

  • Individual students evaluated each project and presentation.
  • I compiled this information into an Excel spreadsheet.
  • After all the tabulations, I formatted that information into a report within a copy of those worksheets.
  • I then copied the different sections of those worksheets into Word, adjusted any formatting, and printed that file to a PDF (for user compatibility), which I then uploaded to each group within the assignment’s grading area on Moodle. (I used Word because it was easier to arrange the report sections into a page-reading-friendly format than to wrestle with Excel’s print options.)

This meant I would start off with roughly 35 sets of evaluations for each group’s project, ranging from text files to HTML files, in as many different layouts as there were students. For that first semester last year, I had to go through each form manually, copying/pasting or, in many instances, retyping the information into my Excel spreadsheets. This included written comments on what worked well (and why), what worked less well (and why), and how it might be improved. Between the many different formats I received and the number from which I could not simply copy and paste, compiling each project’s evaluations took entirely too many hours.

That Fall semester, I decided this process needed to be automated. One option was to build my own website using a SQL backend. However, while storyboarding my tables and forms, a friend gently reminded me of how much testing and time it would take (class had already begun). So instead he convinced me to look for an out-of-the-box solution.


I’d had some experience implementing SurveyMonkey for a former employer, so I decided to try it out for our projects. It may seem odd at first, but if you think about it, a rubric or evaluation form is really just a survey form. The free version only allows 10 questions, far fewer than the number of topics I needed, so I upgraded to a paid plan. While it wasn’t exactly cheap, it didn’t break the bank either ($25/month per semester seemed within reason). Unfortunately, the screenshots I made last year have disappeared (probably lost while cleaning up my folder structure; a learning moment for making actual use of my backup system!), so please be patient with an evaluation based on screenshots from an outdated survey as well as my imperfect memory.

The webform for creating the survey had some flexibility, but trying to create a ratings system using option buttons lacked the precision I wanted, and it took a while to build out. Although the individual evaluation areas (using a scale rating) were tedious to build, I was able to easily adapt the topics from that PDF form. Editing the different sections was simple: adding, deleting, and moving areas worked as you would expect. The form was also browser based, so I didn’t have to worry about computer compatibility, and I could download all the responses to an Excel (or CSV) file. The problem was that a couple of students swore up and down that they had submitted responses, yet nothing showed up on the backend. This was important to confirm because their responses count as part of their grades, but SurveyMonkey does not provide an email confirmation indicating whether a submission succeeded. Students also complained about not being able to edit their responses: I set it up so that there was one generic form for all projects within a class for a particular assignment; part of that form required them to fill in their own name, the project title, and the class number in addition to evaluating the project. This setup required each student to take the same “survey” 5 or 6 times (depending on the number of groups). Although that makes sense for an actual one-off survey, here it was tedious and error prone.

Here are a few screenshots that should illustrate that although it is a functionally usable product, it really is simplistic in design and usability (but it does get the job done):

  • The main dashboard, My Surveys:


  • Sample Empty Survey form (page 1 and 2):

 Sample Survey Page 1                   Sample Survey Page 2

  • The general editing of a survey:

Editing a Survey

  • Editing a specific question’s options (I broke this scrollable window into multiple clips because my screen capture software (Snagit) would not capture a sub/child-scrollable window):

Editing a Question 1 Editing a Question 2 Editing a Question 3 Editing a Question's Type





Adobe FormsCentral (and a lot of Microsoft Excel and Word)

For the next project, I tried Adobe’s FormsCentral. It was much cheaper ($15/month) and I liked this process better (mostly):

  • I could first create a form within Excel and save it as a pdf file.
  • I would then use my own copy of Adobe Acrobat Pro XI to convert the pdf into a form (using form fields in place of the cells I used within Excel).
  • I would then upload this form to FormsCentral, where I had further options. For instance, I could insert a submission button within the form itself, so that all a student needed was Adobe Acrobat Reader and a web connection to upload the form upon pressing the Submit button. I also set it to email a confirmation back to the student, customized to include their own responses (that is, a copy of their data as proof of submission in case something went wrong).
  • Distributing the form was also easy. Although I could give students a URL, I decided to download the newly created form from FormsCentral and post it to the class’s Moodle page.
  • Like SurveyMonkey, I could download the results as an Excel (or csv) file. This was the part for which I really needed automation! I could sort and separate out this information into individual sheets for each group.
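That sort-and-separate step can be sketched with Python’s standard csv module. The column names below (“student”, “project”, “score”) are illustrative stand-ins, not the actual FormsCentral export headers:

```python
# Group the downloaded CSV results by project so each group gets its own
# "worksheet" (here, just a list of rows) ready to paste into the
# per-group report template.
import csv
from collections import defaultdict
from io import StringIO

# A stand-in for the CSV file exported from the form tool.
raw = """student,project,score
Alice,Penguins,4
Bob,Hedgehogs,5
Cara,Penguins,3
"""

by_group = defaultdict(list)
for row in csv.DictReader(StringIO(raw)):
    by_group[row["project"]].append(row)

for group, rows in sorted(by_group.items()):
    print(group, len(rows))  # Hedgehogs 1, then Penguins 2
```

The same grouping could be done with Excel’s sort-and-filter tools; a script just makes the step repeatable each time a new batch of responses is downloaded.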

The one thing I found frustrating was that if I needed to change the form, I couldn’t edit it online because I had created it locally. (If I had created the form online, I could have, but the online creator wasn’t as robust as arranging the form in Excel first.) This usually meant going back to Excel and beginning the creation process again, though in some instances it only meant adjusting a form field property (for instance, “required”), re-saving the form, and re-uploading it to FormsCentral. One potential show-stopper for this product is that it is platform dependent. If the student used a Windows PC with Adobe Acrobat Reader installed, the process worked fine. But that requires that students know whether they are using Acrobat on their computer, which you cannot take for granted. In some cases, they were unknowingly using other PDF software, which broke all of the form field features of my Adobe document. That was a simple problem to solve: just download and install Adobe Acrobat Reader. However, it was a different story for students who used an Apple computer. FormsCentral’s online help about that issue was only so helpful… Most students were able to find a workaround, which I then distributed to the rest of the class, but sometimes even those solutions inexplicably would not work. In those cases, the students would email me their PDF file and I would manually copy the information over to my spreadsheet.

Although I liked the features of FormsCentral more than SurveyMonkey’s, SurveyMonkey’s cross-platform compatibility mitigates a large problem, making a browser-based solution more appealing than a PDF-bound one. Again, the programmer in me wants to develop my own survey application. I think building one-off webforms (that is, a particular survey page for a particular project) should be fairly easy, but like anything code related, it requires time and testing to ensure problems won’t get in the way of class time. In the end, the out-of-the-box solutions are quicker and much easier (involving only a few workarounds) but still limited (at least from this one sampling; I did look at other options such as email-based forms, but those were also lacking).

Note:  my apologies for the lack of FormsCentral screenshots from the actual site; however, their site does provide good documentation and the interface is fairly intuitive.

Sample of the Workflow

Here is an example of a Project evaluation form students would submit: SAMPLE PROJECT Evaluation Form

Here’s an example of what the raw data (well, with formatted titles and column sizes) in a worksheet looked like once I imported the results from FormsCentral:

Partial Capture of Compiled Data


Here is a screenshot of the worksheets tabs within an Excel workbook I used to keep track of and create reports for a class’ particular project:

Sample Worksheet listing within a Workbook

I used the corresponding worksheets to

  1. compile the FormsCentral submitted data;
  2. create a template I would copy for each group, into which I later pasted that group’s results from the compiled worksheet (normally it would contain the students’ names, but here they are removed);
  3. serve as a quick reference after all grades were turned in;
  4. compile final individual grades/responses for all the groups (worksheet named after the group);
  5. clean up the previous worksheet (worksheet name ending in “(2)”) by first sorting the responses differently for each project to hide the identity of comment creators, then removing those names altogether; and
  6. create a PDF report as feedback to the students. This report was created by copying the different sections of the “(2)” worksheet into a Word document and formatting it into something readable. It was then saved as a PDF file and uploaded to Moodle:

In order to help make a little more sense of what I’m doing with all of these worksheets, I’ve included an actual workbook (removing student names, of course):

Sample Workbook used for Project Evaluations and Reports

It should be noted that in some of the worksheets, you will occasionally see two types of highlighted rows. They were for accounting purposes. The yellow ones indicate people who never turned in evaluations for that group’s particular project and so would receive no points for that evaluation. The off-green rows indicate that the individual in question was a member of the group the worksheet covers; they were presenting their own project, so no evaluations from them were needed.

Here are samples of the Reports I created from the worksheets that students would ultimately see. The Word document was created first from the worksheets marked with a “(2)” on their tabs. I added this intervening step because formatting in Word gives me much finer control than I would otherwise have in Excel. The PDF file is just the same Word document; PDF removes any compatibility issues I might otherwise have (these are what students saw within the grading section of their assignment pages in Moodle):

Sample Project Report (Word version)

Sample Project Report (PDF, student version)

Some Thoughts about my Project Grading Efforts

No matter the technology used, the topics themselves, as well as the ranking scales, are ultimately what matter in student evaluations. I was embarrassed by my earlier attempts, and so engaged the students themselves to evaluate the evaluation forms. Again, pedagogically, I thought it would be a useful exercise. In practice, most student feedback was not very considered; however, they did point out a few inconsistencies.

In the end, the one thing I was trying to avoid was creating specific evaluation forms for specific projects–that is, I wanted to use one form for all projects. I really wanted the lower maintenance of a generic form, but after scouring the web for examples, what I did find was just as problematic as my own. Not only were the grade descriptions too generic, there just weren’t that many samples available, which is why I made it a point to include my own in this posting–not that I think they are perfect (far from it), but they might serve someone else as a starting point.

I’m also wondering if it might be more useful in future courses to decrease the point value range (and therefore the number of grade descriptions) from 6 to 3. I do like the idea of having more nuanced evaluations, but a simplified system might lead to more bottom- and top-heavy grades as opposed to a middle range. One of the things I noticed when it came to students evaluating each other’s work was that some students always gave high, positive feedback. There were also some students who consistently gave low, harsh feedback–either out of laziness or vindictiveness (I’ve heard a remark on more than one occasion that went something like, “So and so’s a know-it-all and I can’t stand her!”).

Grading the Grader

Typically, most students began both their workshop comments and their written comments at a very generic level and with little support.

At first, I would give specific in-text feedback to their form (see the first posting of this series):

in-evaluation comments

which would be followed by some general reminder about detail:

generic comments

On that first round, I didn’t count off; they received full points. After all, they were learning about details and I was trying to be positive.

But being this detailed for 35 sets of evaluations for each group (4-6) proved overwhelming. In the end, I opted for a simplified approach, hoping to illustrate, through my own comments on their work, the kind of detail they needed to be forming within their evaluations. I created a Word document with my comments for each person, which I would then copy/paste into Moodle’s grading form. I would then use this file as the template for the next project, making my notes underneath the previous comments. This way, I could easily tell whether the students were making use of my feedback or just plain ignoring it, which in turn helped me write the new set of comments:

combined comments


On the next project, I would remind them of the comments and suggestions I had made on their first set of evaluations and take off a few points. I continued this on subsequent projects, so that by the later ones, unaddressed issues would earn very few points.

comments improved

But even with my stripped-down style, providing this kind of feedback is important. I would even say it was one of the more important learning moments for students. If you read through the sample Report PDF above, you’ll get a better idea of the range of comments. I freely admit that this is an area I need to spend more time experimenting with, but even so, I would never cut this part out of the class.

Grading their Peers

Originally, I had students write a one- to two-page essay evaluating both their peers’ and their own contributions to their projects. However, given that students had a lot to work on for class at any given moment, and to help me cut down on the amount of grading, I decided to streamline this into a form, though a free-form text one. I looked around for quite a while until I stumbled upon this Collaborative Project Evaluation Form on Purdue’s website. I altered it only by removing the form fields. As with the project evaluations, efforts were mixed and required coaching throughout the semester. Though this student tended to give full marks every time, the following is still an example of a fairly thorough evaluation:

peer evaluation sample


As this post is already well beyond a comfortable reading length, I’ll just end with a graphic illustrating my folder organization for the projects. Of course, even for non-collaborative classes, file management ought to be a concern; however, I hope this image demonstrates that with so many moving parts, you’ll need more than the one-folder-per-class method I so often see other teachers using:

Folder Organization
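To give a concrete (if hypothetical) sense of what such a scheme involves, here is a small Python sketch that builds a per-project, per-group folder tree. The project, group, and subfolder names are placeholders for illustration, not my actual ones:

```python
from pathlib import Path

# Placeholder names -- substitute your own projects, groups, and subfolders.
PROJECTS = ["01 Practice Podcast", "02 Packaging Project"]
GROUPS = ["Alpacas", "The Defenders"]
SUBFOLDERS = ["Evaluations", "Reports", "Submissions"]

def build_course_tree(root):
    """Create root/<project>/<subfolder>, plus one folder per group
    under each project's Submissions subfolder."""
    root = Path(root)
    for project in PROJECTS:
        for sub in SUBFOLDERS:
            (root / project / sub).mkdir(parents=True, exist_ok=True)
        for group in GROUPS:
            (root / project / "Submissions" / group).mkdir(parents=True, exist_ok=True)
    return root
```

Running something like `build_course_tree("ENGL102")` once per project beats creating dozens of folders by hand, and `exist_ok=True` makes it safe to re-run after adding a project or regrouping students.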

Next: Pt 4, A Sample Project

Groups and Group Work, Notes from the Collaborative Classroom pt 2

Groups and Group Work

Group work is difficult no matter the technology or the students. On the first day of class, I spend the bulk of the time explaining how and why the class will operate collaboratively for the entire semester. Each semester, there are usually one or two students who decide to drop the class because of this. I freely admit to my classes that in high school, and as an undergraduate in college, I did not enjoy working in groups–mainly because there was always at least one person who would not participate for one reason or another, thereby making the rest of the group do all the work. Or worse still, I sometimes ended up doing the entire assignment myself with everyone else receiving equal credit. There always seem to be students who, no matter what, won’t participate. And that is a problem with group work.


In an effort to minimize this lack of participation, I try to push the issue back onto the groups, giving them the authority to “vote a member off the island”; that is, to deny a particular group member points on a project for which they did not actually do anything (or only minor things). And the group’s decision is final; I cannot overrule it. However, before they enact such measures, I do have a few rules in place, such as notifying me about the problem well ahead of time, so that I can give them feedback on how they might resolve the issue. Groups have used this option only a couple of times—the students in question mostly did not object or appeal because they knew they had contributed little or nothing at all. In those cases, the “big stick” of grading had very little effect. However, I have noticed from their evaluations of each other’s group participation that, like professors, students have a difficult time penalizing a classmate, even when upset about the lack of effort from some members. The typical reason was that they didn’t want to be the “bad guy.”


It may not be obvious, but assigning groups is not as simple as it sounds.

The first few times that I assigned group projects, I let the students decide among themselves in which groups they wanted to be, believing that by being democratic, the students would feel empowered and therefore would be more engaged. I was wrong. Although there were some people who did gain a measure of inspiration by being self-directed, most students’ selection was based on friendships, race, popularity, attractiveness, or just whoever hadn’t yet found a place within a group. This would result in some bonding but more often created a sense of exclusion among members–before projects even started.

To avoid this, I started assigning people to groups myself. At first, I tried the counting-off or drawing-names methods, but doing this strictly by hand was time-consuming and really not as random as I would like (for example, seat placement and attendance issues would bias the randomness). So instead, I used Moodle’s Random function within group creation (you can create groups based on the total number of groups desired or on the number of members desired per group). Since I was often still grading projects after assigning new ones, people within Moodle were assigned to multiple groups, which meant I had to use different naming conventions to keep from confusing myself as well as my students. But Moodle’s automated group creation only allows for numeric or alphabetic naming conventions (“Group 01” or “Group A”)–which made it difficult to keep track once we got beyond the first two projects. I could have recycled those groups by deleting and then recreating them, but I like having the history.

Here’s a snapshot of the number of groups listed towards the end of the semester:

Groupings Overview


The problem with Groups within Moodle is that it creates them at the class level rather than at the assignment level. It would be handy if, on the assignment creation page, there were an option to create groups at that level. This would alleviate the need for different naming conventions; that is, I could just stick with “Group 01”, etc. But the different conventions proved an adequate workaround.

So instead of recycling (deleting and recreating) the group names, I began using Moodle’s random group creation as a starting point only, so as to get a rough draft assignment for everyone. I would then copy and paste those groups and members into a spreadsheet and then manually adjust the assignments. I then went back to Moodle to manually create the groups using such group naming conventions as animal names or comic book titles (Alpacas, The Defenders, etc.), making the assignments based on my spreadsheet. It took time to ensure that group members hadn’t previously worked together (though this was almost impossible by the last couple of projects). This mixed solution had the added benefit of helping me sometimes find “better” fits for some group members whom I thought would benefit more by working with other students in particular.
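That shuffle-then-adjust pass could also be scripted. The sketch below is hypothetical (it’s not a Moodle feature, and the function and parameter names are my own inventions), but it shows the core idea of my spreadsheet step: try many random shuffles and keep the grouping that repeats the fewest previous pairings:

```python
import random
from itertools import combinations

def assign_groups(students, group_size, past_pairs, tries=2000, seed=None):
    """Shuffle students into groups of roughly group_size, keeping the
    arrangement that repeats the fewest previous pairings.

    past_pairs is a set of frozensets of students who have already
    worked together.
    """
    rng = random.Random(seed)
    best, best_repeats = None, None
    for _ in range(tries):
        pool = list(students)
        rng.shuffle(pool)
        groups = [pool[i:i + group_size] for i in range(0, len(pool), group_size)]
        # Fold a too-small trailing group back into the others.
        if len(groups) > 1 and len(groups[-1]) < 3:
            for i, student in enumerate(groups.pop()):
                groups[i % len(groups)].append(student)
        # Count how many within-group pairs have worked together before.
        repeats = sum(
            1 for g in groups for pair in combinations(g, 2)
            if frozenset(pair) in past_pairs
        )
        if best is None or repeats < best_repeats:
            best, best_repeats = groups, repeats
        if best_repeats == 0:
            break
    return best
```

Fed a roster and a running set of past pairings, something like this would do in seconds what my spreadsheet pass did by hand; by the last projects of the semester, when repeat-free groupings become impossible, it simply returns the least-repetitive shuffle it found.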

Overall, I thought working with new people mimicked how real-world jobs work. We typically don’t get to choose who our team members are, so learning how to navigate group politics and issues while working towards a common goal is invaluable. It also continually reinforced the value of assessing people’s skills and knowledge at the outset and applying that information to the project at hand. And perhaps more importantly in terms of college, it connected people who otherwise may never have had the opportunity to meet one another. One of the most rewarding things about such a class is receiving an email from a former student who happens to mention still being in contact with former group-mates.


So what makes a good group size? Early on, I tried everything from 2 to 6 members. Two-member groups are too small–if one person was sick, decided to drop the course, or just plain wasn’t working, the other person had to bear the responsibility for the entire project alone. Five- and six-member groups are too large–mainly from a logistical perspective; that is, although I made some class time available for the groups to work on their projects, the majority of the work was done outside the classroom, and accommodating that many different class and work schedules can be very difficult. I found that groups of 3-4 people are the sweet spot (though depending on the project, even 4 can be a little too many among whom to divide the responsibilities). This size allowed groups to recover from the absence of a member, added the brainstorming power of divergent perspectives, and, perhaps more importantly, provided the opportunity for an equitable division of labor.


As I mentioned, most group work was done outside of the classroom. To get groups thinking about potential challenges, we discussed the pros and cons of face-to-face meetings as well as synchronous and asynchronous technological tools (email, texting, video conferencing, phone calls, etc.). Some groups always met face-to-face (at the library, for example), and some groups tried using only email. The most productive groups tended to use a combination of face-to-face meetings and distance communication tools. These groups typically began with a face-to-face meeting, brainstorming a project goal and developing a tentative plan and schedule. They would use this plan to set up dates/times and methods of communication. I also created Moodle forums for each group that were visible only to their members. However, almost no groups ever used them (and even then, only initially), though this did vary from semester to semester.


For assignments, I required that in addition to one hard copy being turned in, each group member upload digital files to Moodle. Every member turned in the same copy; I had each one submit it separately for accounting reasons. Although Moodle lets you filter the class by groups on its grading page, I wanted to be sure I was working with the correct people–and it was also another way to encourage member participation by making their accountability very visible (if someone were not doing the work, it would be highly unlikely that the other members would share their files).

About those “digital files”: since Microsoft Office was freely available on campus computers in the library and various labs, and since I also have my own copy of that product, I stipulated that the file formats had to be one of the Office formats (such as .docx) or .rtf (rich text format). But I did not want to limit students to a particular software package, so I also allowed any program that could save or print to a .pdf file. And in cases where a project was more physical in nature (the Packaging project, for example), they could take digital pictures and upload those to Moodle. Typically, most projects were created digitally, but I also felt it was helpful to see hard-copy manifestations of them (again, as with the Packaging Project). This also meant that I did not teach any particular program, although I would demonstrate useful tips and tricks within Word or other tools. I posted links to different applications as I came across them on the web, and I encouraged students to share their discoveries with the class as well. One such application that a group used one semester was a Photoshop-like web application. Although I did not teach any particular application, students were free to ask me for help and we would learn together. I enjoyed those moments because they demonstrated that you don’t have to be a “guru” to learn a new tool–just willing to play.

One area where technology was a problem for the groups was operating system/software compatibility. Usually, this took the form of some members using Apple laptops while others used Windows. Most of the time, they could save to one another’s format (i.e., Office for Mac), but even then, minor changes in formatting would occur. This was usually a problem for the presentation part of the project (I had a projector but no Mac adapter)–and those formatting issues could really hurt evaluations (so we had to be sure to acknowledge them during the presentations).

In the next post, I’ll discuss the evaluation (grading) and mechanics of the projects:  Pt 3.


Introduction, Notes from the Collaborative Classroom pt 1

This posting is broken out into sub-postings for sake of readability:

  • Intro
  • Group Work
  • Workflow and Mechanics of Evaluations
  • Projects


I’ve slowly been expanding collaborative learning within my courses, from testing out group essays in Freshman Composition classes to group projects in Early American Literature classes, which included projects (and presentations) on specific short stories as well as creating a video essay on larger topics such as “What is American literature?” In addition to my own earlier attempts, my interest in group projects was further fueled after reading discussions on collaborative learning and general education reform within the HASTAC community forums. While I was at first skeptical of contract grading, the concept was intriguing and seemed like it could fit in with my previous class group work. So I decided to give the idea the proverbial “old college try” and, last year, conducted a larger classroom experiment by building the entire semester around collaboration–not just for the students, but for myself as well.

Teaching Style

Being able to integrate your teaching style/philosophy into whichever pedagogical camp you subscribe to is of primary importance. My own style requires less straight lecturing and more student interaction. You might call it a cousin to the throw-students-into-the-lake-and-let-them-swim approach. That is, I feel students learn best by struggling with the material themselves rather than by lecture alone. Of course, I do lecture, but generally by way of asking questions and playing devil’s advocate in order to get a discussion going about the day’s particular topic, slowly getting students to develop the connections within the material themselves.

It seems to me that no matter the subject, courses are really about critical thinking skills–that is, slowing down all the information or processes we take for granted and holding them up to a magnifying glass. And while I cannot physically pull and poke at students’ thoughts, I am able to ask questions during these struggles that help students discover a direction in which to take them.

In terms of collaborative learning (well, any learning), the trick always seems to rest with the How: incorporating these questions into assignments and projects that can also be evaluated in a way that is useful to the student (beyond a numbered ranking) as well as to my own teaching.

There is much more to collaborative work within a particular class than just having students work in groups. This posting is about sharing last year’s experiences within my Technical Writing courses in regard to the class structure itself as well as the group work.

Whereof what’s past

The first time I taught a Technical Writing class was back in 2009, and it began somewhat awkwardly. I had at first tried following the assigned textbook chapter by chapter, which was centered more on business writing than on the larger field of technical writing—that is, it focused on particular formats and elements in a sort of recipe approach rather than on larger discussions of why or how the ingredients worked, alone and with others. This only encouraged students to throw in such ingredients without much consideration for why they used them (“It looked nice” was the typical justification). I also made the mistake of trying to teach HTML so that students could create their own websites. We spent a week at the beginning of the semester going over the basics; students were then to turn in a first version of their websites for their midterms and revise them for the Final, incorporating principles they learned during the semester. In the end, many students “cheated” by using pre-made templates, which removed all of the design decisions they were supposed to be focusing on for that project. While I believe learning HTML basics is valuable for other courses, there are so many tools and platforms available for document design projects, such as WordPress, that learning HTML only gets in the way in this particular course. If this were a Digital Writing course, for example, starting off by coding their own websites would be invaluable before moving into other platforms, in that students would learn what those other platforms would be “helping” them with, as well as gain insight into how they might customize such tools.

One thing that worked really well was the workshop format of the class, in that it fostered critical thinking skills in both the individual and group aspects. Groups organized a very short presentation of their projects (three minutes) focused not so much on the content of a particular subject but rather on the details of the project itself, such as why the group chose one font over another. Or its size. Or color. Or placement. The presentations were to focus on the choices the groups made with their particular audience in mind, as well as the goal of the document. After the presentations, the class would then workshop the project while the creators took notes and remained silent until we finished the discussion (here, my poetry workshop roots show). At the beginning of the semester, I usually started the discussions in order to demonstrate the sorts of details and issues I was trying to get them to focus on—as well as to steer them away from comments like, “Oh, I like the colors she used,” and instead concentrate their comments on why the element in question was effective or not (again, in terms of the project’s stated audience and the goal of its document). After only a few classes, the students began leading the discussions.

As with all classes, I’m always discovering areas that need improvement. What follows are notes and examples about what I changed or added, as well as issues I came across that I hadn’t considered before. In the next post, I discuss group work: pt 2

More 205 Early American Lit video essays: “Allusions”

This group’s video essay from last spring’s 205 Early American Lit class shows that even with literary topics, it’s still possible to have fun: “Allusions”

From the group:

“Hearts racing, pulses pounding, adrenaline rushing….One of the biggest documentaries of the year, a visually stunning epic film to see, ‘Allusion’ will transform your very thoughts of American Literature. Watch the phenomenal acting of Robert, Brooke and Amy as they take you down the breathtaking road to answer to life’s burning question, “What is an allusion?” These ‘very professional’ interviews were conducted locally, in Lafayette’s retail stores, UL campus and personal homes! Now we will get to find the answer of this lifelong question!!”

Like the group’s video from my previous post, this group also used their different questions as section headers to transition between/introduce the different segments. Where they differ from the other video essays, however, is that they decided to capitalize on their own personalities/voices. Tone is something teachers spend a lot of time on in freshman comp courses. That is, most students (and even some writers) believe that the formal structure of an essay requires a stuffy or boring tone (and consequently elevated language). But this group’s use of a humorous, conversational style/tone still manages to convey a thoughtful and well-organized discussion of their initial question (What is an allusion?). Again, I can imagine using this as an exercise for a freshman writing course to show how all the elements work together to create a tone. For instance, part of what made their use of humor work so well was their decision to keep in the interviewers’ voices–what would happen if they re-cut the video removing those voices? How might that affect the overall cohesion in terms of tone?

Although a number of groups mixed in other videos, this group used them as direct examples to support their evolving definitions (as well as for comic effect). One thing I’ve noticed in this as well as the other videos is that the students really do have a solid sense of how different cuts work–from the fade out/in to the jump cut. I can’t help but believe that in terms of the writing process, pointing out such moves to students might help them see the same transitional moves within their own writing.

One of my other favorite things about this video is how it shows how a seemingly simple question like theirs can open up still more interesting questions.

Another Class Video Final: “What Makes an American?”

Here is another 205 Early American Lit class video essay:  “What Makes an American?”

From the group:

“The United States of America was founded on a number of principles derived from a group of men now called the ‘Framers’ or ‘Founding Fathers’. The documents they authored set up a new nation based on their beliefs. Those same beliefs are engrained into the society that functions within that framework. So, after 200 years, have the American people stayed true the founder’s beliefs?”

“We interviewed 6 people on the University of Louisiana at Lafayette campus and at Caffe Cottage in Lafayette.”

This group chose an interesting middle ground between the ways the first two groups’ videos used cuts between the interviewees. In the previous post, I wondered how creating and editing videos might inform students’ rhetorical strategies within their own writing. It seems as if this group went into making their video with a written essay already in mind, using it to structure the video.

There is a voice-over introduction that opens with a terrific question that was greatly influenced by our classroom discussions of different early American writing: “Have we become the culture the founding fathers envisioned?” Rather than focusing on one interviewee at a time, or switching back and forth between them, they used “a few related questions” to organize the rest of the video. Although there is a lot of strong visual rhetoric taking place in the introduction and conclusion, they went a different route to divide up the interviews themselves.

They used plain white text for the questions against a black background, creating a sense of section headers, before displaying the responses. The responses were also “punctuated” by the group’s clever first question, which asked people for three words describing what an American was to them. Rather than using the white text on a black background again when they dug deeper by asking the interviewees why they chose those particular words, they instead focused on one word at a time per interviewee, displaying the word at the bottom of the video within a transparent, graduated section/block. This served to keep the word in the viewers’ minds throughout the interviewee’s response. It also helped contrast the different as well as similar responses from the other interviewees.

The group then followed up by asking how the interviewees thought the founding fathers might have described an American. They again displayed the question using white text on a black screen which created the sense of a different section or paragraph. The effect of contrasting their own current opinions against their own perceptions of the founding fathers’ opinions is telling and could lead to even more interesting questions.

Like the concluding paragraph of an essay, this video, in the form of a voice-over, provides a concise, thoughtful conclusion while using the image of the American flag to bookend its use in the beginning, creating a great sense of cohesion.

UPDATE: The next video…

Another Early American class video, “What is Literature?”

As promised, here’s another video from my English 205, Early American Literature titled “What is Literature?”:

From the group:

“We focused our project around a central theme, “What is American literature?” we interviewed a diverse group of individuals ranging from an anthropology instructor and orientation director to college students our own age. We also asked our interviewees what they considered to be literature and if they thought it had changed over the past 20 years.”

They managed to capture a broad range of ideas. From a technical standpoint, what I like about placing this video second in my postings is how their use of cutting back and forth between the interviewees contrasts so nicely with the first group’s video, “Uhhmerikuh”. These clips are much longer than the rapid ones from the first video. It’s not that one is better than the other, but they create an entirely different feel for the video as a whole. It’s interesting how much the students really know about this concept, as well as how effectively they employ it as a rhetorical strategy for the entire video. It’s also interesting to see/hear how including the questions changes the feel of the interview.

It would be interesting to teach a 101 freshman comp course using video essays first and then trying to translate their strategies into a written essay. I wonder if the visualization of videos–something the students know a great deal about, having watched movies their whole lives–would help them better understand the rhetorical moves within their own writing and how they might use similar strategies.

UPDATE: The next video…

edX reveals costs–er, revenue–oh, wait…

EdX is back in the news, so my outdated posting about their initial announcement still seems relevant, which always makes me happy. This article from the Chronicle of Higher Education website discusses how EdX will generate revenue for itself and the universities–sort of. It sounds more like a Field of Dreams plan: “If we build it, they will come. And then we will figure out how to make money.” That is, the EdX documents show how revenue could possibly be divided between universities and EdX according to two different “partnership models” (pricing structures), but they don’t quite get at where that pool of revenue will come from. The Chronicle does a great job of keeping that “But how?” question relevant.

I’m still waiting to see details on how these courses will differ from other online courses. The article also mentions a pilot edX MOOC, a “Circuits & Electronics” course offered last Fall at San Jose State University, where “…60 percent of students passed the San Jose State course; 91 percent passed the edX-infused version.” While that sounds really encouraging, we don’t know the basis of the comparison–that is, did the San Jose State course and the edX one have exactly the same assignments and tests? And if so, were they graded by the same people? I’m guessing that the edX course used online tests. How did they ensure that the people who took the exams were the same people enrolled in the course? Although the article says, “EdX has a deal with Pearson VUE…to hold proctored examinations for its MOOCs,” it’s not clear from the article whether such proctoring services were used in this pilot course.

Don’t get me wrong. I am THRILLED by the possibilities that online courses hold. But I’m tired of all the hand-waving and buzzwords from different companies and universities. Even in my own university, there is a tendency to use Corporate-Speak when discussing distance learning. I’m guessing that besides the frontier nature of such endeavors, the colleges must be mindful of a political climate wherein budgets are debated in terms of bite-sized election ads, and where, for the political public, perception matters more than the reality of circumstances.

a followup to an earlier post on Collaborative Learning and Teaching…

Finally, the follow-up on my post from last March (school really keeps me busy).

It’s going on 9 months since the students from my Early American Literature course completed their video essays on American literature for the class Final. To recap the project:

Given their own resistance to things new, I also decided to help them be creative for the Final. Instead of a written exam or essay, they are going to make a 10-15 minute video, interviewing people. That is, they are treating this as an experiment based on a question they have developed from our readings. I gave them a base question that I myself am interested in:  what makes American literature American? But they could also develop their own. For instance, one group is thinking about exploring the idea that, given we believe our country was founded on religious freedom (they could pull ideas from Winthrop, the Declaration, etc…),  are we really free to believe as much as we think we are? I think given the nature of the current Republican debates, it’s a great question. Some have already prepared a proposal and met with me to make sure they were clear on what the project was about as well as help them narrow down their question and possible methods for exploring it.  They are actually becoming excited (whereas at first, they let out a collective groan…).

On the day of the Final, each group introduced their video, trying to summarize it the way a blurb might appear on the back of a DVD case. The mood was palpably one of excitement–much like graduation. Not only did they surprise me with their different creative approaches, they surprised themselves even more so. It was one of my favorite class moments ever.

I liked the results so much, that for my Technical Writing course this semester, I’m focusing the entire semester on collaborative work presented in a workshop style setting. We’re concentrating on the creative process and design choices. But I’ll (hopefully!) write more on their work later.

I tried uploading one of the videos (250MB) to this site but ran into file-size issues. Since all the students signed forms giving me permission to use their work for educational purposes, I’m trying out uploading it to YouTube first and linking from that in this post… So go ahead and give them a view–they did a fantastic job of cutting back and forth between the interviewees (be a little patient: they have a fun opening, but it takes them a little over a minute to get to the interviews).

UPDATE: The next video…