Abstract: Highlights
• We focus on embedding representation in neural networks with efficient compression.
• A quantum-like neural network with Hierarchical Entanglement Embedding (QHEE) is proposed.
• Intra-word entanglement embedding aggregates the constituent morphemes of a word from multiple perspectives.
• Inter-word entanglement embedding captures N-gram compound semantics based on unitary transformation.
• Our space-efficient QHEE outperforms several baselines while compressing the embedding layer.