In general, annotations are notes made on a text while reading, for example by highlighting or underlining. In a machine translation setting, such markings of the text constitute error annotations, which provide information about the classification of translation errors.
The main focus of this study was to evaluate the graphical user interface of BLAST, an annotation tool that can be used to perform human error analysis of output from any machine translation system in any language. The primary intended use of BLAST is the annotation of translation errors.
The evaluation of BLAST focuses mainly on identifying usability issues, assessing understandability, and proposing a redesign to overcome the usability issues found. By allowing the participants to explore BLAST, the usage and performance of the tool were observed and are later described.
Five participants took part in the usability study and were asked to perform user tasks designed to evaluate the usability of the tool. The required data were collected on the basis of these tasks, using interviews, observation, and a questionnaire, and were analyzed with both quantitative and qualitative approaches.
The participants' technical knowledge and interest in experimenting with a new interface influenced the evaluation of the tool. The problems individual participants faced during the evaluation were identified, and solutions to overcome them were derived.
Finally, a redesign proposal for BLAST was developed as an approach to overcoming these problems. I proposed several interface designs addressing the issues that were found; these designs can be adapted into the existing system or implemented anew. A follow-up evaluation study of the proposed interface designs is also possible.
Source: Linköping University
Author: Kondapalli, Vamshi Prakash