We wrote at the beginning of the month about “Robo-Readers”, computer programs that can “read” and grade compositions and reports. Then, on April 11th, a study was released concluding that these programs can score standardized test essays as capably as humans. Is it time for teachers to take a backseat and let the robots take over?
The study was conducted by Mark Shermis of the College of Education at the University of Akron. Shermis used nine different automated grading systems to score more than 16,000 essays from middle and high school students. The scores the computers assigned were as accurate as those given by human graders – and in some cases more so.
And it’s not just about accuracy. One of the programs, e-Rater, developed by the Educational Testing Service, can grade an astounding 16,000 essays in 20 seconds.
However, teachers might still have a few years before they need to start scanning the want ads. While robo-readers might make a decent replacement for humans when the writers don’t know they’ll be graded by a computer, a director of writing at M.I.T. has found it easy to beat the system. Sprinkling in words like “however” and “moreover” raises the score, longer essays earn higher marks, and the programs can’t tell fact from fiction.
At least not yet.
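To see why those tricks work, here is a minimal sketch of a toy essay scorer that rewards only surface features – word count and transition words – the kind of signals critics say automated graders lean on too heavily. This is purely illustrative; the function name, the word list, and the scoring weights are our own assumptions, not the actual method used by e-Rater or any real grading system.

```python
# Hypothetical illustration only: a toy scorer that rewards essay length and
# transition words. It is NOT the algorithm used by e-Rater or any real grader.

TRANSITION_WORDS = {"however", "moreover", "furthermore", "therefore", "consequently"}

def toy_score(essay: str, max_score: int = 6) -> int:
    """Score an essay 1-6 using only word count and transition-word usage."""
    words = essay.lower().split()
    length_points = min(len(words) / 200, 3.0)      # longer essays score higher
    transition_points = min(
        sum(w.strip(".,;") in TRANSITION_WORDS for w in words), 3
    )
    return max(1, round(min(length_points + transition_points, max_score)))

if __name__ == "__main__":
    padded = "However, moreover, the moon is made of cheese. " * 50
    concise = "The essay prompt is answered accurately but briefly."
    print(toy_score(padded))   # high score despite being nonsense
    print(toy_score(concise))  # low score despite being factually sound
```

A scorer like this hands out a high mark to 400 words of padded nonsense and a low one to a short, accurate answer – exactly the weakness the M.I.T. critic exploited.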
For more information on this topic, check out these articles:
Or read the study for yourself.
Image by NASARobonaut