Abstract: Generative models have been widely used in event extraction; however, the interpretability of event extraction has not been fully investigated. In this paper, we propose EE-LCE, an Event Extraction framework based on LLM-generated CoT Explanations, which generates chain-of-thought-style (CoT-style) explanations for events. To this end, we annotate each sample in the event datasets with an explanation of the reasoning process produced by a large language model (LLM), GPT-3.5, and fine-tune a lightweight language model (LM), Flan-T5, on the augmented dataset in a supervised manner, enhancing both the interpretability and the performance of event extraction. Moreover, we use a prefix tree (trie) to constrain the decoding of the generative event extractor, i.e., constrained decoding, so that its output conforms to the expected format. We conduct experiments on three benchmark datasets for event extraction. The results demonstrate the robust performance of EE-LCE and confirm the effectiveness of both the CoT explanations and constrained decoding.
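The trie-based constrained decoding mentioned above can be sketched as follows. This is a minimal illustration, not the paper's implementation: the `Trie` class and `allowed_next_tokens` function are hypothetical names, and the integer token IDs stand in for the tokens of valid output strings. At each decoding step, the generator's vocabulary would be masked down to the tokens the trie permits after the current prefix.

```python
class Trie:
    """Prefix tree over token-ID sequences of valid outputs."""

    def __init__(self):
        self.children = {}   # token ID -> child Trie node
        self.is_end = False  # True if a valid sequence ends here

    def insert(self, token_ids):
        """Add one valid token-ID sequence to the trie."""
        node = self
        for tok in token_ids:
            node = node.children.setdefault(tok, Trie())
        node.is_end = True


def allowed_next_tokens(trie, prefix):
    """Return the set of token IDs that may follow `prefix`.

    Walks the trie along the already-generated prefix; the children of
    the reached node are the only tokens the decoder may emit next.
    """
    node = trie
    for tok in prefix:
        node = node.children.get(tok)
        if node is None:
            return set()  # prefix leaves the trie: no valid continuation
    return set(node.children)


# Example: constrain decoding to two valid label sequences
# (token IDs are illustrative placeholders).
trie = Trie()
trie.insert([10, 11, 12])
trie.insert([10, 13])
print(allowed_next_tokens(trie, [10]))  # -> {11, 13}
```

In practice this kind of mask is applied at every generation step (e.g. via a logits processor in the decoding loop), which guarantees the model can only emit sequences present in the trie.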