Highlights

- We introduce Cross-View Correction (CVC), a novel framework for aligning time-series and textual embeddings in LLM-based time-series forecasting.
- A cross-attention match module integrates wavelet-decomposed signals with LLM embeddings to inject rich semantic context into temporal representations.
- A graph-based prompt-learning module dynamically selects window-specific textual prompts via a sparse similarity graph and GCN refinement.
- Contrastive correction applies layer-wise and output-level losses to remove redundancy and further improve cross-modal alignment.
- Extensive benchmarks demonstrate that CVC improves MAE/MSE over state-of-the-art baselines.
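To make the contrastive-correction highlight concrete, below is a minimal sketch of a symmetric InfoNCE-style loss that pulls matched time-series and text embeddings together while pushing mismatched pairs apart. This is a hypothetical illustration using NumPy, not the paper's implementation: the actual method applies such losses layer-wise and at the output level, and the function name `info_nce` and the temperature value are assumptions.

```python
import numpy as np

def info_nce(ts_emb, txt_emb, temperature=0.1):
    """Symmetric InfoNCE loss aligning two embedding views.

    Hypothetical sketch of the contrastive-alignment idea: matched
    (time-series, text) pairs sit on the diagonal of the similarity
    matrix and are treated as positives; all other pairs are negatives.
    """
    # L2-normalize both views so similarities are cosine similarities
    ts = ts_emb / np.linalg.norm(ts_emb, axis=1, keepdims=True)
    tx = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)
    logits = ts @ tx.T / temperature           # pairwise similarity logits
    labels = np.arange(len(ts))                # matched pairs on the diagonal

    def xent(lg):
        # numerically stable softmax cross-entropy against the diagonal
        lg = lg - lg.max(axis=1, keepdims=True)
        p = np.exp(lg) / np.exp(lg).sum(axis=1, keepdims=True)
        return -np.log(p[labels, labels]).mean()

    # average the time-series->text and text->time-series directions
    return 0.5 * (xent(logits) + xent(logits.T))

rng = np.random.default_rng(0)
txt = rng.normal(size=(8, 16))
# a slightly perturbed copy stands in for well-aligned time-series embeddings
aligned_loss = info_nce(txt + 0.01 * rng.normal(size=(8, 16)), txt)
random_loss = info_nce(rng.normal(size=(8, 16)), txt)
```

As expected for such a loss, `aligned_loss` comes out well below `random_loss`, since aligned pairs dominate the similarity diagonal.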
DOI: 10.1016/j.knosys.2025.113957