Handbook of Automated Essay Evaluation: Current Applications and New Directions

Mark D. Shermis, Jill Burstein
Routledge, Jul 18, 2013 - Psychology - 384 pages

This comprehensive, interdisciplinary handbook reviews the latest methods and technologies used in automated essay evaluation (AEE). Highlights include the latest developments in the evaluation of performance-based writing assessments and recent advances in the teaching of writing, language testing, cognitive psychology, and computational linguistics. This greatly expanded follow-up to Automated Essay Scoring reflects the numerous advances that have taken place in the field since 2003, including automated essay scoring and diagnostic feedback. Each chapter follows a common structure, including an introduction and a conclusion. Ideas for diagnostic and evaluative feedback are sprinkled throughout the book.

Highlights of the book’s coverage include:

  • The latest research on automated essay evaluation.
  • Descriptions of the major scoring engines, including the E-rater®, the Intelligent Essay Assessor, the IntelliMetric™ engine, c-rater™, and LightSIDE.
  • Applications of the technology, including a large-scale system used in West Virginia.
  • A systematic framework for evaluating research and technological results.
  • Descriptions of AEE methods that can be replicated for languages other than English, as seen in the example from China.
  • Chapters from key researchers in the field.

The book opens with an introduction to AEE and a review of the "best practices" of teaching writing, along with tips on the use of automated analysis in the classroom. Next the book highlights the capabilities and applications of several scoring engines, including the E-rater®, the Intelligent Essay Assessor, the IntelliMetric™ engine, c-rater™, and LightSIDE. Here readers will find an actual application of AEE in West Virginia; psychometric issues related to AEE such as validity, reliability, and scaling; and the use of automated scoring to monitor reader performance and detect reader drift, identify grammatical errors, and evaluate discourse coherence quality, along with the impact of human rating on AEE. A review of the cognitive foundations underlying the methods used in AEE is also provided. The book concludes with a comparison of the various AEE systems and speculation about the future of the field in light of current educational policy.

Ideal for educators, professionals, curriculum specialists, and administrators responsible for developing writing programs or distance learning curricula, those who teach using AEE technologies, policy makers, and researchers in education, writing, psychometrics, cognitive psychology, and computational linguistics, this book also serves as a reference for graduate courses on automated essay evaluation taught in education, computer science, language, linguistics, and cognitive psychology.


Contents

1 Introduction to Automated Essay Evaluation  1
2 Automated Essay Evaluation and the Teaching of Writing  16
3 English as a Second Language Writing and Automated Essay Evaluation  36
4 The E-rater® Automated Essay Scoring System  55
5 Implementation and Applications of the Intelligent Essay Assessor  68
6 The IntelliMetric™ Automated Essay Scoring Engine: A Review and an Application to Chinese Essay Scoring  89
7 Applications of Automated Essay Evaluation in West Virginia  99
8 LightSIDE: Open Source Machine Learning for Text  124
9 Principles and Prospects  136
10 Developing Warrants for Automated Scoring of Essays  153
11 Validity and Reliability of Automated Essay Scoring  181
12 Scaling and Norming for Automated Essay Scoring  199
13 Human Ratings and Automated Essay Evaluation  221
14 Using Automated Scoring to Monitor Reader Performance and Detect Reader Drift in Essay Scoring  233
15 Grammatical Error Detection in Automatic Essay Scoring and Feedback  251
16 Automated Evaluation of Discourse Coherence Quality in Essay Writing  267
17 Automated Sentiment Analysis for Essay Evaluation  281
18 An Approach to Automated Essay Scoring Motivated by a Socio-Cognitive Framework for Defining Literacy Skills  298
19 Contrasting State-of-the-Art Automated Scoring of Essays  313
20 The Common Core State Standards and Its Linguistic Challenges and Opportunities  347
Index  355
Copyright

About the authors (2013)

Mark D. Shermis, Ph.D. is a professor at the University of Akron and the principal investigator of the Hewlett Foundation-funded Automated Scoring Assessment Prize (ASAP) program. He has published extensively on machine scoring and recently co-authored the textbook Classroom Assessment in Action with Francis DiVesta. Shermis is a fellow of the American Psychological Association (Division 5) and the American Educational Research Association.

Jill Burstein, Ph.D. is a managing principal research scientist in Educational Testing Service's Research and Development Division. Her research interests include natural language processing, automated essay scoring and evaluation, educational technology, discourse and sentiment analysis, English language learning, and writing research. She holds 13 patents for natural language processing educational technology applications. Two of her inventions are e-rater®, an automated essay evaluation application, and Language Muse℠, an instructional authoring tool for teachers of English learners.
