I keep reading about how technology is REVOLUTIONIZING (all caps intentional to indicate the fervor with which this sentiment is expressed) education, particularly at the college level. Harvard, Stanford, MIT: top universities are offering courses in statistics and computer programming, in some cases drawing upwards of 350,000 enrolled students across just three classes. As I mentioned in an earlier blog post, John Boyer teaches over 2,500 students in his intro geography class at Virginia Tech through strategic use of online videos, Facebook, and Twitter.
So should I be able to achieve a more modest goal of teaching 500 students in Freshman Composition? Plenty of instructors already teach upwards of 150 (5 sections × 30 kids in each), even though much research suggests that first-year writing classes should enroll fewer than 20 students and that faculty members should teach no more than 60 writing students per semester. What’s 350 more when you’re already not sleeping and students are trying to graduate in four years?
Sarcasm aside, I really do wonder how many students I can take on with the help of Web 2.0: multiple-choice quizzes automatically graded by the course management system; students uploading drafts and completing peer reviews just as they would in a small class; podcasts walking through class readings; Twitter for questions.
There’s still the small matter of the composition part of Freshman Composition, typically four essays over 16 weeks. Students like to have a grade and feedback on their first papers before they submit the second, so that means maybe three weeks to get through 500 essays. Plus, I never teach just one class, and there are other job duties; grading these essays can’t take up all my time. If I spend 2 minutes on each paper, that’s 1,000 minutes of grading, and I can get through them in just under 17 hours.
I like to devote more than 2 minutes to an essay that a student has worked on for weeks.
Can technology save the day? According to an article by Julia Lawrence (2012),
two professors from the College of Education at the University of Akron, Ohio, Morgan and Mark Shermis, decided to put several essay-grading software packages available on the market to a rigorous test, by having them grade 16,000 essays that had been previously assigned grades by teachers. The results, announced during this year’s National Council on Measurement in Education meeting held in Vancouver, Canada, showed that at least some of the programs produced marks very similar to the ones given by humans.
What was taken into consideration?
Now, I don’t know about other English instructors, but I really don’t grade on vocabulary per se, while grammar and mechanics count for anywhere from 10 to 20% on my essay grading rubrics, depending on the assignment. That leaves 80–90% ungraded by these amazing software programs: thesis, supporting evidence, organization and transitions, appropriate citation.
I’m sure someone, somewhere has developed a system that can locate a thesis and check for appropriate content, based on keyword and synonym frequency. I’m sure that software can scan a paper for proper MLA or APA or Chicago citations, and plenty of products already exist to check for plagiarism. I’m not sure how I would feel about the graded product.
In this article, a high school teacher explains that he has been able to “grade” 25 writing assignments from each of his 120 students (that’s 3,000 total) using an essay-scoring program. With the increasing emphasis on writing in the Common Core State Standards, not to mention the new STAAR exams, getting students to write and write and then write some more, all with the aim of producing complete sentences that use at least three examples from the prompt, such software may be very helpful.
For me, with 6-8 page research papers coming in, I have doubts. Many, many doubts.
Lawrence, J. (2012, April 26). Software for automating essay grading put to the test. Education News. Retrieved June 5, 2012, from http://www.educationnews.org/technology/software-for-automating-essay-grading-put-to-the-test/