For purposes of assessing our instructional processes, it is important to archive grades and feedback to students at the finest possible level, i.e., the level of the individual problem or question. For this to work, the person constructing a test or assignment should prepare a matrix showing the relationship of the individual items to the target outcomes. For human-graded items, there should also be a rubric that relates performance on the test to scores and to outcomes.
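The item-to-outcome matrix above could be sketched as a simple lookup table that rolls per-question grades up into per-outcome averages. This is only an illustrative sketch; the question IDs (Q1, Q2, ...) and outcome labels (O1, O2, ...) are hypothetical placeholders, not actual course data.

```python
# Hypothetical matrix: each question maps to the target outcomes it assesses.
ITEM_OUTCOME_MATRIX = {
    "Q1": ["O1"],
    "Q2": ["O1", "O2"],
    "Q3": ["O2"],
}

def outcome_scores(grades):
    """Aggregate per-question grades (0.0 to 1.0) into per-outcome averages."""
    totals, counts = {}, {}
    for question, score in grades.items():
        for outcome in ITEM_OUTCOME_MATRIX.get(question, []):
            totals[outcome] = totals.get(outcome, 0.0) + score
            counts[outcome] = counts.get(outcome, 0) + 1
    return {o: totals[o] / counts[o] for o in totals}

# One student's graded exam: O1 averages Q1 and Q2; O2 averages Q2 and Q3.
print(outcome_scores({"Q1": 1.0, "Q2": 0.5, "Q3": 0.0}))
```

Storing grades in this question-keyed form is what makes the later outcome-level reporting possible without regrading anything.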
Our high-speed scanning capacity gives us the capability of generating scantron-like forms for grading multiple-choice questions. Human graders can fill in mark-sense items (boxes or circles) corresponding to rubric-based grading decrements. One way to automate the identification of answer sheets is to give each student a personalized answer sheet that has their name or ID number encoded on it in a machine-recognizable form, e.g., a bar code and/or a special OCR font. Of course, the time needed to distribute those personalized answer sheets to, say, 120 students would be disruptive and would occupy the proctor's attention for that time, leaving open possibilities for cheating. Alternatively, each student could be required to print out their own personalized answer sheet, either in lab or at home, and to bring it to the exam, thereby eliminating distraction and delay at test time.
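Whatever machine-readable encoding is printed on the sheet, a scan misread of a student ID should be detectable rather than silently mis-filed. One common way to do that is to append a check digit to the ID before it is encoded; the sketch below uses the standard Luhn algorithm as one plausible choice (the function names and the decision to use Luhn are assumptions, not part of any existing system here).

```python
def luhn_check_digit(id_digits: str) -> str:
    """Compute a Luhn check digit for a numeric ID string.

    A single misread digit in a scanned ID will fail the check,
    so the sheet can be flagged for human review instead of
    being credited to the wrong student.
    """
    total = 0
    for i, ch in enumerate(reversed(id_digits)):
        d = int(ch)
        if i % 2 == 0:      # these positions are doubled once the check digit is appended
            d *= 2
            if d > 9:
                d -= 9      # Luhn rule: sum the digits of any two-digit product
        total += d
    return str((10 - total % 10) % 10)

def encoded_id(student_id: str) -> str:
    """ID string to print (as a bar code or OCR-font line) on the answer sheet."""
    return student_id + luhn_check_digit(student_id)
```

At scan time, the reverse check (recompute the digit from the first n-1 characters and compare) decides whether the sheet identifies itself reliably.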
Students should be able to interrogate the grading of their work, as well as statistical results, so they can tell the relative caliber of their work. Such data should be available online, say, via a file system with access-control lists.
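The access-control idea above amounts to a per-student view: each student sees their own score plus class-level statistics, never other individuals' scores. A minimal sketch of such a view follows, with hypothetical student keys and a simple percentile-rank definition (fraction of class scoring strictly below); the actual enforcement would live in the file system ACLs, not in this function.

```python
def student_view(scores: dict, student: str) -> dict:
    """Return only what one student may see: their own score plus class stats.

    scores  -- mapping of student ID to numeric score (all students)
    student -- the requesting student's ID
    """
    values = sorted(scores.values())
    n = len(values)
    own = scores[student]
    class_mean = sum(values) / n
    # Percentile rank: percentage of the class scoring strictly below this student.
    below = sum(1 for v in values if v < own)
    return {
        "score": own,
        "class_mean": class_mean,
        "percentile": 100.0 * below / n,
    }
```

For example, in a class with scores 50, 70, 70, and 90, the top student would see their score, a class mean of 70, and a percentile of 75.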