Abstract: A post embedding (a representation of a post's text in embedding space that effectively captures its semantic meaning) is a foundational component at LinkedIn, consumed by product surfaces in both retrieval and ranking (e.g., ranking posts in the feed or video tab). This paper
presents the post embeddings used at LinkedIn, produced by fine-tuning a pre-trained transformer-based large language model (LLM) with multi-task learning across a diverse set of semantic labeling tasks. We observe positive transfer, with improved performance across all tasks compared to training them independently. The generated post embeddings outperform baseline models in zero-shot learning, demonstrating their potential for broader applicability. Furthermore, their performance surpasses that of OpenAI's ADA-001 and ADA-002 embeddings on LinkedIn-specific datasets and tasks. We also describe
the offline evaluation methodology and the deployment to our nearline infrastructure, which makes the post embedding available to any downstream application within minutes of post creation.
We present how the embeddings were applied in the Feed product surface, in both the ranking and retrieval stages, and showcase the real-world online impact that demonstrates their superior performance. Finally, we share the results of applying the embeddings to the retrieval system of our video ranking product surface at LinkedIn. These embeddings have been battle-tested in production at LinkedIn for over two years, consistently powering multiple products.