Abstract: Point-cloud-based large-scale place recognition is a crucial component of simultaneous localization and mapping (SLAM) and global localization in extensive environments. Most current learning-based methods do not account for the density variation of point clouds captured by LiDAR sensors: dense regions provide richer detail, while sparse regions are more vulnerable to noise, leading to less robust feature extraction and lower retrieval accuracy. In this article, we incorporate point cloud density variation into place recognition and propose a novel approach, the density-driven adaptive hybrid network (DAH-Net), to generate distinguishable and robust descriptors for reliable localization. First, we design a density-based dynamic local feature aggregation (DDFA) module that dynamically adjusts the neighborhood size for point clouds with varying distributions, ensuring precise extraction of local detail. We further propose an efficient contrast-enhanced linear attention (CELA) module that allocates attention according to point cloud density, enabling the model to capture more discriminative global contextual features for better scene modeling. Meanwhile, we incorporate voxel feature extraction to resist local noise and fuse point features into voxel features to compensate for the detail lost during voxelization. Finally, to improve efficiency, we develop a lightweight variant, DAH-Net-L, which reduces computational complexity and model size by downsampling point clouds and reducing the number of channels. Extensive experiments on multiple datasets demonstrate that our method achieves state-of-the-art (SOTA) results, striking a favorable balance among recognition accuracy, model parameters, and inference speed while exhibiting excellent robustness and generalization.
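The density-adaptive neighborhood idea behind DDFA can be illustrated with a minimal sketch. This is not the paper's implementation: the function name, the density proxy (distance to a probe neighbor), and the `k_min`/`k_max` bounds are all hypothetical; it only shows the general principle of giving sparse regions larger neighborhoods than dense ones.

```python
import numpy as np

def adaptive_neighborhoods(points, k_min=8, k_max=32, probe_k=16):
    """Illustrative sketch: pick a per-point neighborhood size from local
    density, so sparse regions aggregate over more neighbors than dense ones.
    All parameter choices here are assumptions, not the paper's settings."""
    n = len(points)
    # Pairwise distances (fine for small clouds; use a KD-tree at scale).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # Density proxy: distance to the probe_k-th nearest neighbor
    # (large distance => locally sparse region).
    probe = np.sort(d, axis=1)[:, probe_k]
    # Map sparsity to a neighborhood size in [k_min, k_max].
    t = (probe - probe.min()) / (np.ptp(probe) + 1e-9)
    ks = np.round(k_min + t * (k_max - k_min)).astype(int)
    # Gather each point's ks[i] nearest neighbors (skip index 0 = the point).
    order = np.argsort(d, axis=1)
    return [order[i, 1:ks[i] + 1] for i in range(n)]
```

A local feature extractor would then aggregate (e.g., max-pool) per-point features over each variable-size index list, which is the step the DDFA module performs learnably.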