README for annotation data from the Word Sense Similarity annotation task:

The annotation data from the task is contained in the file wssim2.ratings.csv. 
This csv file contains the following columns of data:

lexsub_id,sense_id,judgment,user_id,lemma

Each line in the file represents an annotation of a single sense. Because 
annotator judgments were recorded for every WordNet sense of the target 
lemma, an annotator's judgments for a given instance span multiple lines.

The first column, lexsub_id, identifies the instance; it can be used to locate 
the corresponding sentence in the lexsub_wcdata.xml file in the Data directory. 
The sense_id column identifies the WordNet sense being annotated, and the 
"judgment" column holds the annotator's rating of how applicable that sense is 
to the given instance, on a 1-5 scale from least to most applicable. The 
user_id column gives the unique id of the annotator providing the rating, and 
the "lemma" column gives the lemma being annotated. In addition to the ratings 
supplied by the three annotators, the file also includes the average rating for 
each sense (for each instance) across all annotators. These rows carry the 
user id "avg".
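As a rough illustration, the format described above can be parsed with 
Python's standard csv module, collecting each annotator's sense ratings per 
instance and skipping the precomputed "avg" rows. The rows below are made up 
for the example (the sense_id and judgment values are placeholders, not taken 
from the actual data):

```python
import csv
import io
from collections import defaultdict

# Illustrative rows in the wssim2.ratings.csv column layout; the
# sense ids and judgments here are invented for the example.
sample = """lexsub_id,sense_id,judgment,user_id,lemma
101,s1,4,1,paper
101,s2,1,1,paper
101,s1,5,2,paper
101,s1,4.5,avg,paper
"""

# Map (lexsub_id, user_id) -> {sense_id: judgment}, keeping only the
# individual annotators and skipping the "avg" rows.
ratings = defaultdict(dict)
for row in csv.DictReader(io.StringIO(sample)):
    if row["user_id"] == "avg":
        continue
    key = (row["lexsub_id"], row["user_id"])
    ratings[key][row["sense_id"]] = float(row["judgment"])

# Annotator 1's judgments for instance 101, one entry per sense.
print(ratings[("101", "1")])
```

To process the real file, replace the io.StringIO(sample) wrapper with an 
open() call on wssim2.ratings.csv.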

Guidelines for the WSsim2 annotation task can be found at:
http://www.dianamccarthy.co.uk/downloads/WordMeaningAnno2012/

The analysis of the data from this task is described in the following
paper:

Katrin Erk, Diana McCarthy and Nicholas Gaylord (2013). Measuring Word Meaning 
in Context. Computational Linguistics, 39(3), pp. 511-554.

