Abstract: Sketch-guided point cloud reconstruction aims to provide an efficient and flexible pathway for automatically generating plausible 3D shapes from a free-hand sketch that captures the modeling intentions of end users. However, this task remains in its infancy owing to the inherently complex, abstract, and highly variable patterns of sketches. In this paper, we present a novel sketch-guided framework based on free-form deformation (FFD) for 3D point cloud generation. To capture sufficient meaningful features from a sketch, we devise a dual-branch encoding architecture that extracts complementary semantic and geometric cues by formulating the input as a binary image and a 2D point cloud, respectively. The proposed encoder also learns features from a template point cloud to guide content generation, before decoding the resulting global features into a set of control points for FFD. We further construct a large, diverse, and manually collected dataset, Sketch-3DPC, comprising 13,754 sketch and 3D point cloud pairs across 11 categories. Both qualitative and quantitative experimental results demonstrate the superiority of the proposed methodology and dataset.