Sketch2Symm: Symmetry-aware sketch-to-shape generation via semantic bridging
By: Yan Zhou, Mingji Li, Xiantao Zeng, and more
Potential Business Impact:
Turns simple drawings into 3D objects.
Sketch-based 3D reconstruction remains a challenging task due to the abstract and sparse nature of sketch inputs, which often lack sufficient semantic and geometric information. To address this, we propose Sketch2Symm, a two-stage generation method that produces geometrically consistent 3D shapes from sketches. Our approach introduces semantic bridging via sketch-to-image translation to enrich sparse sketch representations, and incorporates symmetry constraints as geometric priors to leverage the structural regularity commonly found in everyday objects. Experiments on mainstream sketch datasets demonstrate that our method achieves superior performance compared to existing sketch-based reconstruction methods in terms of Chamfer Distance, Earth Mover's Distance, and F-Score, verifying the effectiveness of the proposed semantic bridging and symmetry-aware design.
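The abstract does not spell out how the symmetry constraint is enforced, but one minimal way to illustrate the idea is a reflection-consistency penalty over an assumed mirror plane, scored with the same Chamfer Distance reported as an evaluation metric. The sketch below is an assumption for illustration only, not the authors' implementation: the function names, the fixed mirror plane through the origin, and the plain NumPy formulation are all hypothetical.

```python
# Illustrative sketch (not the authors' code): Chamfer Distance as used for
# evaluation, plus a reflection-consistency penalty of the kind a symmetry
# prior could impose on a generated point cloud.
import numpy as np


def chamfer_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Symmetric Chamfer Distance between point sets of shape (N, 3) and (M, 3)."""
    # Pairwise squared distances, shape (N, M).
    d2 = np.sum((a[:, None, :] - b[None, :, :]) ** 2, axis=-1)
    # Average nearest-neighbour distance in both directions.
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()


def reflect(points: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Mirror points across a plane through the origin with the given normal."""
    n = normal / np.linalg.norm(normal)
    return points - 2.0 * (points @ n)[:, None] * n[None, :]


def symmetry_loss(points: np.ndarray, normal: np.ndarray) -> float:
    """Low when the shape is (near-)symmetric about the assumed mirror plane."""
    return chamfer_distance(points, reflect(points, normal))


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    half = rng.normal(size=(512, 3))
    plane = np.array([1.0, 0.0, 0.0])  # hypothetical mirror plane normal
    # A point cloud built from a half-shape and its mirror image scores ~0.
    symmetric_cloud = np.concatenate([half, reflect(half, plane)])
    print(symmetry_loss(symmetric_cloud, plane))
```

In a training setup of this kind, such a penalty would typically be added to the reconstruction loss with a weighting factor, so that shapes deviating from the assumed structural regularity are discouraged; how (or whether) Sketch2Symm does this exactly is not stated in the abstract.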
Similar Papers
Order Matters: 3D Shape Generation from Sequential VR Sketches
CV and Pattern Recognition
Turns 3D drawings into real shapes faster.
Sketch2PoseNet: Efficient and Generalized Sketch to 3D Human Pose Prediction
CV and Pattern Recognition
Draws 3D human poses from simple drawings.
Symmetria: A Synthetic Dataset for Learning in Point Clouds
CV and Pattern Recognition
Teaches computers to understand 3D shapes better.