Should We Be Worried About Robo Graders?

Attention students: coming soon to a college or university near you – the robo grader. No, it is not going to replace your instructor, but it is a tool educators are considering as a means of making their jobs more efficient and, dare I say, easier. I understand the challenge of providing substantive feedback on hundreds of students’ papers within a short time period, and I know many educators who continually struggle to meet their required feedback deadlines.

It would be easy to suggest that instructors simply need a better time management plan; however, those who take the time to thoroughly read and review students’ papers know that even with the best plan in place, there are weeks when it is still a race against time. Now automated grading services are available, and educators are divided in their opinions of their effectiveness. Let’s explore these arguments and consider whether students should worry about, or embrace, this new technology.

What are Robo Graders?

Though automated grading systems are not new, recent advancements in technology have changed the very nature of how they function. One of the latest robo graders, SAGrader, was developed by a sociology professor at the University of Missouri, St. Louis. In the article, Idea Works: Using Automated Grading for Collaborative Learning, it is described as a program that will “blur the line between synchronous and asynchronous collaboration by allowing instructors to create and save assignment-specific grading standards in the program.”

What this statement means is that instructors create a rubric or feedback form with criteria for the assignment, and the robo grader then utilizes “a blend of linguistic, statistical, and artificial intelligence approaches” to develop a score. It will also indicate any missed criteria when it sends its report. The program has been designed for essays ranging from short answers to multiple pages. As reported by Inside Higher Ed, automated graders have so far been used primarily for standardized tests and placement examinations. With the newest innovations such as SAGrader, however, the technology is being discussed by educators and considered as a possible teaching tool in numerous higher education settings – including community colleges.
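To make the mechanics a little more concrete, here is a minimal sketch of rubric-based scoring in Python. It assumes a simple keyword-matching approach and a made-up sociology rubric of my own; SAGrader’s actual blend of linguistic, statistical, and AI techniques is far more sophisticated than this.

```python
# A minimal sketch of rubric-based automated scoring, assuming simple
# keyword matching. The rubric below is hypothetical, not SAGrader's.

# rubric: criterion name -> terms the grader looks for
RUBRIC = {
    "defines social stratification": ["stratification"],
    "cites a theorist": ["marx", "weber", "durkheim"],
    "gives an example": ["for example", "for instance", "such as"],
}

def grade(essay: str) -> tuple[float, list[str]]:
    """Return a 0-100 score and the list of missed criteria."""
    text = essay.lower()
    missed = [
        criterion
        for criterion, terms in RUBRIC.items()
        if not any(term in text for term in terms)
    ]
    met = len(RUBRIC) - len(missed)
    return 100.0 * met / len(RUBRIC), missed

score, missed = grade("Weber saw class as a hierarchy; for example, ...")
print(f"Score: {score:.0f}")       # Score: 67
print("Missed criteria:", missed)  # ['defines social stratification']
```

Even at this toy scale, the shape of the feedback report – a score plus a list of missed criteria – matches what the program is described as sending back to instructors.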

The Case for Robo Graders

Most of the recent support for automated graders has focused on a study conducted by researchers at the University of Akron in Ohio. The study was funded by the William and Flora Hewlett Foundation, which has been offering grants since 1967 to “solve social and environmental problems.” The foundation has an education program with a commitment to invest in “organizations that develop and advocate for innovation in ideas, practices, and tools, as well as those that participate in the public policy debate on these issues.” In the study, nine automated graders were used to assess 22,000 short essays written by students in junior high and high school. These same essays were also given to human readers, and the results showed that the robo graders “assigned the same scores as human graders.”

In the full study report, Contrasting State-of-the-Art Automated Scoring of Essays: Analysis, it was concluded that “the results demonstrated that overall, automated essay scoring was capable of producing scores similar to human scores for extended-response writing items with equal performance for both source-based and traditional writing genre.” The results of this study have led many educators to believe there is a practical application for robo graders in higher education, since these programs “aren’t meant to read for artistic merit but, rather, for how well a writer communicates basic ideas.” To further support the use of automated graders, “educators say up to 75% of students in America fell short on a recent national writing assessment – and some suggest that the computers are more consistent in their grading, while human graders sometimes are inconsistent.” This sounds promising; however, these programs have their opponents.

The Case Against Robo Graders

In the article, Are Robo-Graders the Answer to Student Writing Problems?, the limitations of this technology are discussed, and the point is made that the “software currently on the market cannot differentiate between a coherently written essay and a ‘nonsensical’ jumble of clauses that are relevant to the topic, but don’t make any sense together,” and furthermore, “a computer can’t successfully cope with formats other than straight prose.” As an educator, I agree that this is a valid point, and it applies to other forms of automated educational software I work with now. For example, I have access to an automated service that inserts comments about grammar and spelling. As I review its report, which I share with students, I note that the program is unable to understand the context of what was written – it processes information from a mechanical perspective.

One of the most vocal opponents of robo graders is Les Perelman, the director of the Writing Across the Curriculum Program at MIT. Perelman states that “automated essay graders are incapable of measuring anything but superficial elements of an essay – and they do a bad job of that, too.” He believes that only a teacher can “assess the nuance, structure and content of student writing” and that using automated graders will force teachers to “dumb-down” their assignment instructions.

The issue of replacing instructors’ feedback with a software program is more complex because grading involves more than determining whether assignment criteria were met. Evaluating a paper involves establishing the truth or validity of the essay, which is difficult to accomplish through technology because the “legitimacy or quality of a product is subjective by nature.” As I consider how I grade papers, I find it difficult to imagine a program replicating the process I use. For example, I don’t want students merely to write well and offer their opinions – I expect their work to include a well-developed thesis, supported with credible, academic sources. I could utilize a robo grader to assess the structure of the paper and the number of sources used – but not the overall quality of the essay’s development, critical analysis, and research.
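That distinction is easy to illustrate. Below is a minimal sketch of the purely mechanical checks a program can make, assuming APA-style parenthetical citations and a thesis announced in the opening paragraph – both assumptions of mine, not features of any particular product. Everything it measures is countable; nothing it measures is quality.

```python
# A minimal sketch of mechanical paper checks: countable structure only.
import re

def mechanical_checks(paper: str, min_sources: int = 3) -> dict:
    # count distinct parenthetical citations like (Smith, 2012)
    citations = set(re.findall(r"\([A-Z][A-Za-z]+,\s*\d{4}\)", paper))
    first_paragraph = paper.split("\n\n")[0].lower()
    return {
        "source_count": len(citations),
        "enough_sources": len(citations) >= min_sources,
        # crude proxy for a thesis: does the opening announce an argument?
        "has_thesis_signal": any(
            phrase in first_paragraph
            for phrase in ("this paper argues", "i will argue", "this essay")
        ),
    }
```

A paper could pass every one of these checks and still offer a weak thesis supported by poorly chosen sources – which is exactly the gap I am describing.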

Embrace Technology or Avoid It?

The William and Flora Hewlett Foundation sponsored an essay grading competition called the Automated Student Assessment Prize, or ASAP. The winners were just announced this week: Jason Tigg (United Kingdom), Stefan Henß (Germany), and Momchil Georgiev (United States). Their programs used a “combination of predictive analytic strategies to drive their software, and not just natural language processing.” Natural language processing focuses on the interaction between human language and computers, while predictive analytics attempts to forecast future patterns from existing data. This combination could help shift automated grading from purely objective checks toward more subjective judgments.
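In rough terms, the predictive approach means training a model on essays that humans have already scored, then letting it predict scores for new essays. Here is a minimal sketch of that idea in Python using scikit-learn; the training data is hypothetical, and the winning ASAP entries were far more elaborate than this single word-feature model.

```python
# A minimal sketch of predictive essay scoring: learn from human-scored
# essays, predict scores for new ones. Training data here is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import Ridge
from sklearn.pipeline import make_pipeline

# hypothetical training set: essays paired with human-assigned scores
train_essays = ["...essay text...", "...another essay..."]
train_scores = [4.0, 2.0]

# word and word-pair features feeding a simple regression model
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), Ridge())
model.fit(train_essays, train_scores)

# predict a score for a new, unseen essay
predicted = model.predict(["...a new student essay..."])[0]
print(f"Predicted score: {predicted:.1f}")
```

The point of the design is that the computer is not judging the essay directly; it is predicting what a human grader would likely have said – which is also why its judgments can only be as good as the human scores it learned from.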

In all of the current articles about this topic, there has been little discussion of the students’ perspective. The belief is that if instructors can return papers with feedback more quickly, students will be able to take corrective action before the next assignment is due. Another suggestion is to give students access to the robo grader before they submit an assignment for grading, so they can receive guidance and make any necessary changes.

Based upon the interest in this software, it would not be surprising to see it tested in colleges and universities. It will be interesting to hear how students react, so please share your comments. Do you believe this is a valuable tool for instructors? Would you utilize the program if it were made available to you?

You can follow Dr. Bruce A. Johnson on Twitter @DrBruceJ and Google+.

Photo © Images.com/Corbis
