On the synchronization between Hugging Face pre-trained language models and their upstream GitHub repository
Abstract: Pre-trained language models (PTLMs) have transformed natural language processing (NLP), enabling major advances in tasks such as text generation and translation. Similar to software package management, PTLMs are developed using code and environment scripts hosted in upstream repositories (e.g., GitHub), while families of trained model variants are distributed through downstream platforms such as Hugging Face (HF). Despite this similarity, coordinating development and release activities across these platforms remains challenging, leading to misaligned timelines, inconsistent versioning practices, and barriers to effective reuse. To examine how commit activities are coordinated between GitHub and HF, we conducted an in-depth mixed-method study of 325 PTLM families comprising 904 HF model variants. Our findings show that GitHub contributors primarily focus on model version specification, code quality improvements, performance optimization, and dependency management, whereas HF contributors mainly address model documentation, dataset handling, and inference setup. We further analyze synchronization across three dimensions -- lag, type, and intensity -- revealing eight distinct synchronization patterns. The dominance of partially synchronized patterns, such as Disperse and Sparse synchronization, highlights structural disconnects in cross-platform release practices. These disconnects often result in isolated or abandoned updates, increasing the risk of incomplete, outdated, or behaviorally inconsistent models being exposed to end users. Recognizing these synchronization patterns is essential for improving oversight and traceability in PTLM release workflows.
External IDs: dblp:journals/corr/abs-2508-10157