README for annotation data from the Word Sense Best annotation task:

The annotation data from the task for all individuals is contained in the file 
wsbestratings.csv. This CSV file contains the following columns of data:

lexsub_id,sense_id,judgment,user_id,lemma

Each line in the file represents an annotation of a single sense for a given 
occurrence. The occurrence is identified by the lexsub_id in the first column, 
which can be used to locate the corresponding sentence in the lexsub_wcdata.xml 
file in the Data directory. The sense being annotated is identified by the 
sense_id column. The judgment column records the binary choice 'y/n' (yes/no) 
for the selection of that sense for that occurrence. As in traditional word 
sense disambiguation annotation, annotators are asked to indicate the single 
sense that best fits the given context according to the definitions; they may 
select more than one sense if they feel the selected senses are equally valid. 
The unique id of the annotator providing the rating is listed in the user_id 
column, and the lemma being annotated is listed in the "lemma" column.
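The per-annotator file can be read with standard CSV tooling. A minimal Python
sketch (the column names are as listed above; the row values below are made up
for illustration, not taken from the released data):

```python
import csv
import io

# Illustrative sample in the wsbestratings.csv format:
# lexsub_id,sense_id,judgment,user_id,lemma (values are invented).
sample = """lexsub_id,sense_id,judgment,user_id,lemma
201,3,y,1,bright
201,4,n,1,bright
201,3,y,2,bright
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Keep only the senses an annotator judged applicable ('y').
selected = [(r["lexsub_id"], r["user_id"], r["sense_id"])
            for r in rows if r["judgment"] == "y"]

for occ, user, sense in selected:
    print(f"occurrence {occ}: annotator {user} selected sense {sense}")
```

To read the released file itself, replace the io.StringIO(sample) handle with
an open file handle for wsbestratings.csv.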
  
In addition to the ratings supplied by the eight individual annotators, we 
provide the mode (the sense selected most frequently across annotators) for 
each occurrence in the file wsbestmode.csv. Where more than one sense is 
selected with the same highest frequency by the annotators, all such senses 
are concatenated with "_".
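The tie handling described above can be sketched as follows (a hypothetical
helper for illustration, not the script actually used to produce
wsbestmode.csv; the order of tied senses in the released file may differ):

```python
from collections import Counter

def mode_sense(sense_ids):
    """Return the most frequently selected sense for one occurrence.

    sense_ids: the 'y'-judged sense_ids pooled over all annotators.
    Ties at the highest frequency are concatenated with "_".
    """
    counts = Counter(sense_ids)
    top = max(counts.values())
    winners = sorted(s for s, c in counts.items() if c == top)
    return "_".join(winners)

# A clear winner vs. a two-way tie:
print(mode_sense(["3", "3", "4"]))       # -> 3
print(mode_sense(["3", "4", "3", "4"]))  # -> 3_4
```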

Guidelines for the WSbest annotation task can be found at:
http://www.dianamccarthy.co.uk/downloads/WordMeaningAnno2012/

The analysis of the data from this task (WSbest) is described in the following
paper:

Katrin Erk, Diana McCarthy and Nicholas Gaylord (2013). Measuring Word Meaning 
in Context. Computational Linguistics, 39(3), pp. 511-554.

