Abstract: Ultrasound is an important diagnostic modality in medicine, offering real-time imaging, freedom from ionizing radiation, and low cost. However, ultrasound examinations remain highly dependent on the operator's experience and technical skill. Robotic autonomous ultrasound scanning (RAUS) is a sequential decision-making problem that requires continuous decisions based on the current state and environment. Recently, reinforcement learning (RL) has made significant progress on such problems across various domains. Nevertheless, most studies feed raw ultrasound images directly into end-to-end networks; the noise and high-dimensional features in these images increase both network complexity and the number of parameters. In this letter, we propose a tissue-view map representation to facilitate model-free deep reinforcement learning for robotic carotid artery scanning. The tissue-view map captures the interaction between the probe and the skin, highlighting the scanned object while accounting for the surrounding tissues. A variational autoencoder (VAE) is then employed to encode the features of the tissue-view map and further reduce dimensionality. Finally, we adopt proximal policy optimization (PPO) to learn the probe-adjustment policy for carotid artery scanning. Our experiments demonstrate that the proposed method outperforms existing approaches and effectively handles object search, contact control, and image quality optimization in real-world scenarios.
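The abstract names two standard components, a VAE encoder and PPO. As a minimal illustrative sketch (not the authors' implementation, and with hypothetical latent sizes), the two core formulas are the VAE reparameterization trick, z = mu + sigma * eps, and the PPO clipped surrogate objective, min(r*A, clip(r, 1-eps, 1+eps)*A):

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var):
    """VAE reparameterization: z = mu + sigma * eps, with eps ~ N(0, I)."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def ppo_clip_objective(ratio, advantage, eps=0.2):
    """PPO clipped surrogate: min(r * A, clip(r, 1-eps, 1+eps) * A)."""
    return np.minimum(ratio * advantage,
                      np.clip(ratio, 1.0 - eps, 1.0 + eps) * advantage)

# Hypothetical 8-dimensional latent code for a tissue-view map feature vector.
mu, log_var = np.zeros(8), np.zeros(8)
z = reparameterize(mu, log_var)

# Clipping caps the incentive to push the new policy far from the old one:
# with ratio 1.5 and advantage 1.0, the objective is clipped to 1.2.
obj = ppo_clip_objective(ratio=np.array([1.5]), advantage=np.array([1.0]))
print(obj)  # [1.2]
```

In the letter's pipeline, the VAE latent code of the tissue-view map would serve as the low-dimensional state fed to the PPO policy; the sketch above only demonstrates the underlying formulas.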