Keywords: Multi-modal foundation models, Multi-modal LLMs, Multi-omics representation learning, Generative models for biomolecular design
TL;DR: HIP-HOP introduces two geometric invariants (HOP and HOI) for 2D cellular tilings, validated on corneal images, showing higher sensitivity than classical indices and serving as interpretable benchmarks for AI models.
Abstract: We present HIP-HOP, a pair of geometric invariants for two-dimensional cellular tilings.
HOP captures orientational order among neighbors, while HOI quantifies polygonal regularity.
Both are invariant to translation, rotation, and scale, robust to segmentation noise, and validated against null models.
Applied to corneal endothelium, HIP-HOP separates control and PMMA groups more consistently than standard morphometric indices. HIP-HOP thus serves as a clinically relevant descriptor and an interpretable benchmark for representation learning.
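Since the abstract only names the invariants, below is a minimal illustrative sketch of a ψ6-style bond-orientational order parameter, a standard proxy for "orientational order among neighbors" on cell centroids. This is an assumed stand-in for the idea, not the paper's HOP definition; the centroid input and Delaunay neighborhood choice are assumptions.

```python
# Illustrative sketch only: a psi_6-style bond-orientational order parameter,
# a common measure of local orientational order among neighbours.
# NOT the paper's HOP definition; centroids and Delaunay neighbours are assumed.
import numpy as np
from scipy.spatial import Delaunay

def mean_psi6(points: np.ndarray) -> float:
    """Mean |psi_6| over cells, given an (N, 2) array of cell centroids."""
    tri = Delaunay(points)
    # Neighbour lists in CSR form from the Delaunay triangulation.
    indptr, indices = tri.vertex_neighbor_vertices
    values = []
    for i in range(len(points)):
        nbrs = indices[indptr[i]:indptr[i + 1]]
        if len(nbrs) == 0:
            continue
        # Angles of bonds from cell i to its neighbours.
        vecs = points[nbrs] - points[i]
        angles = np.arctan2(vecs[:, 1], vecs[:, 0])
        # |psi_6| is 1 when neighbours form a perfect hexagonal ring;
        # the magnitude is invariant to translation, rotation, and scale.
        values.append(np.abs(np.exp(6j * angles).mean()))
    return float(np.mean(values))
```

A noisy hexagonal lattice of centroids scores close to 1 under this measure, while uniformly random points score substantially lower, which is the kind of contrast an orientational-order invariant is meant to capture.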
Submission Number: 89