Task: 3D Point-Cloud Part Segmentation

You are given point clouds of 3D objects from multiple ShapeNet categories (e.g., Airplane, Chair, Table...). Each object instance is represented as an unordered set of 3D points (x, y, z). The goal is to predict a per-point part label for every point in each test shape.

Public files and structure
- public/train/: training point clouds, their expert‑verified per‑point part labels, and optional segmentation visualizations
  - public/train/points/*.pts: ASCII files; each line is a point "x y z"
  - public/train/seg/*.seg: ASCII files; each line is an integer part label for the corresponding point index in the paired .pts file
  - public/train/seg_img/*.png: optional visualizations for some shapes (not required by the metric)
- public/test/: test point clouds (without labels)
  - public/test/points/*.pts
- public/train.csv: rows of "id,category,n_points" for all training samples
- public/test.csv: rows of "id,category,n_points" for all test samples
- public/sample_submission.csv: example file showing the expected format for submissions (ids match test.csv, labels count matches n_points, labels are integers)
- public/description.txt: this file
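Given the paired .pts/.seg layout above, a training shape can be read with plain NumPy. This is a minimal sketch, not official starter code; the helper name `load_shape` is hypothetical, and it assumes the directory layout described above (points/ and seg/ subfolders, one value per line).

```python
import numpy as np
from pathlib import Path


def load_shape(shape_id, root="public/train"):
    """Hypothetical helper: load one shape as ((n, 3) float points, (n,) int labels).

    Assumes the layout described above: root/points/<id>.pts with one
    "x y z" triple per line, and root/seg/<id>.seg with one integer per line.
    """
    pts = np.loadtxt(Path(root) / "points" / f"{shape_id}.pts", dtype=np.float32)
    seg = np.loadtxt(Path(root) / "seg" / f"{shape_id}.seg", dtype=np.int64)
    # Line i of the .seg file labels line i of the paired .pts file.
    assert pts.shape[0] == seg.shape[0], "point/label count mismatch"
    return pts, seg
```

For test shapes, only the .pts half exists; the same `np.loadtxt` call applies.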

Important notes
- Files and ids are anonymized; filenames do not reveal categories or labels.
- Do not rely on any ordering of points: point order within a shape is arbitrary, but line i of a .seg file labels line i of its paired .pts file.
- Segmentation labels are non‑negative integers. Label sets differ by category. You are not required to map integers to names; the metric uses the integer ids only.
- All test categories are covered in the training set, and, within each category, test shapes only use part labels that occur in training.

Evaluation
- Primary metric: mean IoU (intersection over union), averaged over test shapes. For each test shape, IoU is computed for every label in the union of labels present in the prediction or the ground truth; those per‑label IoUs are averaged to score the shape, and shape scores are averaged across the test set. Higher is better.
- Submission format: a CSV with header "id,labels". For each test id, "labels" is a space‑separated list of integers, one per point (length must equal n_points in public/test.csv). Example:
  id,labels
  0101234,0 0 1 1 1 2 0 0 ...
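Reading the metric description above literally, the per-shape score can be sketched as follows. This is an illustrative reimplementation, not the official scorer; the helper name `shape_iou` is hypothetical.

```python
import numpy as np


def shape_iou(pred, gt):
    """Mean IoU for one shape, averaged over the union of labels
    present in either the prediction or the ground truth."""
    pred = np.asarray(pred)
    gt = np.asarray(gt)
    ious = []
    for label in np.union1d(pred, gt):
        inter = np.sum((pred == label) & (gt == label))
        union = np.sum((pred == label) | (gt == label))
        # union > 0 by construction: each label occurs in pred or gt.
        ious.append(inter / union)
    return float(np.mean(ious))
```

The final leaderboard score would then be the average of `shape_iou` over all test shapes.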

Tip: Always verify the length of each predicted label list equals the n_points for that id, and ensure all values are integers. Non‑matching lengths or malformed rows will invalidate your submission.
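The length and integer checks in the tip above can be enforced at write time. A minimal sketch, assuming predictions and per-id point counts are held in plain dicts; the helper name `write_submission` is hypothetical.

```python
import csv


def write_submission(predictions, n_points_by_id, out_path="submission.csv"):
    """Hypothetical helper: write a "id,labels" CSV, validating each row.

    predictions:     dict mapping test id -> list of int part labels
    n_points_by_id:  dict mapping test id -> n_points (from public/test.csv)
    """
    with open(out_path, "w", newline="") as f:
        writer = csv.writer(f)
        writer.writerow(["id", "labels"])
        for shape_id, n in n_points_by_id.items():
            labels = predictions[shape_id]
            # A mismatched length or non-integer value invalidates the submission.
            assert len(labels) == n, f"{shape_id}: got {len(labels)} labels, expected {n}"
            assert all(isinstance(v, int) for v in labels), f"{shape_id}: non-integer label"
            writer.writerow([shape_id, " ".join(map(str, labels))])
```

Keeping ids as strings avoids silently dropping leading zeros (e.g. "0101234").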
