PackNets: A Variational Autoencoder-Like Approach for Packing Circles in Any Shape

27 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Encoder-decoder, Packing, Neural networks, Arbitrary shapes
Abstract: The problem of packing smaller objects within a larger one has long been of interest. In this work, we employ an encoder-decoder architecture, parameterized by neural networks, for circle packing. Our solution consists of an encoder that takes the index of a circle as input and outputs a point, which is then transformed by a constraint block into a valid center within the outer shape. A perturbation block perturbs this center while ensuring it remains within the corresponding radius, and the decoder estimates the circle's index based on the perturbed center. The functionality of the perturbation block is akin to adding noise to the latent space variables in variational autoencoders (VAEs); however, it differs significantly in both the method and purpose of perturbation injection, as we inject perturbation to push the centers of the circles sufficiently apart. Additionally, unlike typical VAEs, our architecture incorporates a constraint block to ensure that the circles do not breach the boundary of the outer shape. We design the constraint block to pack both congruent and non-congruent circles within arbitrary shapes, implementing a scheduled injection of perturbation from a beta distribution in the perturbation block to gradually push the centers apart. We compare our approach to established methods, including disciplined convex-concave programming (DCCP) and other packing techniques, demonstrating competitive performance in terms of packing density—the fraction of the outer object's area covered by the circles. Our method outperforms the DCCP-based solution in the non-congruent case and approaches the best-known packing densities. To our knowledge, this is the first work to present solutions for packing circles within arbitrary shapes.
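The constraint and perturbation blocks described above can be illustrated with a minimal sketch. This is a hypothetical, simplified rendering of the idea (the paper's actual neural encoder and decoder are omitted): the constraint block squashes an unconstrained encoder output into the outer shape, taken here to be the unit disk, and the perturbation block displaces a center by a Beta-distributed fraction of its circle's radius, so the perturbed point stays within that radius. The function names, the radial-squashing choice, and the Beta parameters are all assumptions for illustration.

```python
import math
import random

def constrain_to_unit_disk(x, y):
    """Hypothetical constraint block: map an unconstrained point into the
    unit disk (the outer shape) by squashing its radial distance with tanh."""
    r = math.hypot(x, y)
    scale = math.tanh(r) / r if r > 0 else 0.0
    return x * scale, y * scale

def perturb(cx, cy, radius, a=2.0, b=5.0):
    """Hypothetical perturbation block: displace the center by a
    Beta(a, b)-distributed fraction of its radius, in a random direction,
    so the perturbed point never leaves the circle of that radius."""
    frac = random.betavariate(a, b)        # fraction of the radius, in [0, 1]
    theta = random.uniform(0.0, 2.0 * math.pi)
    return (cx + frac * radius * math.cos(theta),
            cy + frac * radius * math.sin(theta))

# One forward step for a single circle (encoder/decoder networks omitted):
cx, cy = constrain_to_unit_disk(1.7, -0.4)  # raw encoder output -> valid center
px, py = perturb(cx, cy, radius=0.1)        # perturbed center fed to the decoder
```

In this sketch, scheduling the injection (as the abstract describes) would amount to varying the Beta parameters over training so the expected displacement grows, gradually pushing the centers apart.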
Primary Area: optimization
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 11263