Deep Learning with Solid Models
We have a mini-batch of papers at CVPR 2021 focused on deep learning with solid models. The objects around us (chairs, cars, devices, etc.) are almost always designed as solid models in CAD, yet surprisingly few deep learning approaches have worked with this representation... until now. Solid models use the boundary representation, or B-Rep. Think of a B-Rep as a watertight mesh whose faces don't need to be flat and can be trimmed to any shape. These faces are 'glued' together along shared edges, with the topology forming a well-structured graph.
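To make the "topology as a graph" idea concrete, here is a minimal sketch: nodes are faces, and two faces are connected wherever they share a B-Rep edge. The box data below is a hand-written toy, not output from a real CAD kernel.

```python
from collections import defaultdict

def face_adjacency_graph(edge_to_faces):
    """Build {face: set of neighboring faces} from a map of B-Rep edges
    to the (usually two) faces each edge glues together."""
    adj = defaultdict(set)
    for faces in edge_to_faces.values():
        for f in faces:
            adj[f].update(g for g in faces if g != f)
    return dict(adj)

# Toy solid: a box. Each of its 12 edges joins exactly two of the 6 faces.
box_edges = {
    # the bottom face meets the four side faces
    "e0": ("bottom", "front"), "e1": ("bottom", "right"),
    "e2": ("bottom", "back"),  "e3": ("bottom", "left"),
    # the top face meets the four side faces
    "e4": ("top", "front"), "e5": ("top", "right"),
    "e6": ("top", "back"), "e7": ("top", "left"),
    # vertical edges join adjacent side faces
    "e8": ("front", "right"), "e9": ("right", "back"),
    "e10": ("back", "left"),  "e11": ("left", "front"),
}
graph = face_adjacency_graph(box_edges)
```

On a box, every face ends up adjacent to four of the other five (all except its opposite face), which is exactly the kind of regular structure a graph network can exploit.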
In UV-Net (Paper, Code) we feed this graph to a GNN and sample a regular point grid on each face for use with a CNN. This combination outperforms mesh and point cloud methods on both classification and segmentation.
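A rough sketch of the two ingredients, using stand-ins rather than the paper's actual networks: sample a UV grid of 3D points on each parametric face (the CNN input), reduce it to a per-face feature, then mix features over the face-adjacency graph (one GNN-style message-passing round). The surfaces, grid size, and reductions here are illustrative assumptions.

```python
def sample_uv_grid(surface, n=5):
    """Evaluate surface(u, v) on an n x n grid over [0, 1] x [0, 1]."""
    ts = [i / (n - 1) for i in range(n)]
    return [[surface(u, v) for v in ts] for u in ts]

def grid_centroid(grid):
    """Stand-in for a per-face CNN: reduce the point grid to one feature."""
    pts = [p for row in grid for p in row]
    return tuple(sum(c) / len(pts) for c in zip(*pts))

def message_pass(features, adjacency):
    """One GNN round: average each face's feature with its neighbors'."""
    out = {}
    for f, feat in features.items():
        nbrs = [features[g] for g in adjacency[f]] + [feat]
        out[f] = tuple(sum(c) / len(nbrs) for c in zip(*nbrs))
    return out

# Two toy planar faces of a unit cube, glued along one edge.
plane_top   = lambda u, v: (u, v, 1.0)   # the z = 1 face
plane_front = lambda u, v: (u, 0.0, v)   # the y = 0 face
feats = {f: grid_centroid(sample_uv_grid(s))
         for f, s in [("top", plane_top), ("front", plane_front)]}
mixed = message_pass(feats, {"top": ["front"], "front": ["top"]})
```

The point of the UV grid is that it gives every face, however it is trimmed or curved, the same image-like structure, so a standard 2D CNN can consume it before the graph step shares information between neighboring faces.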
In BRepNet (Paper, Code) we show that convolutional kernels can be applied with respect to oriented coedges, outperforming both neural network and heuristic methods on segmentation.
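The core idea can be sketched as follows: each oriented coedge supports topological "walks" (here just next-in-loop and mate-on-adjacent-face), and a kernel gathers features from the entities those walks reach. The coedge data, walks chosen, and weights below are made-up toy values, not the paper's kernel definitions.

```python
def coedge_conv(features, nexts, mates, weights):
    """For each coedge c, output w0*f[c] + w1*f[next(c)] + w2*f[mate(c)],
    i.e. a convolution whose support is defined by topological walks."""
    w0, w1, w2 = weights
    return {c: w0 * features[c] + w1 * features[nexts[c]] + w2 * features[mates[c]]
            for c in features}

# Toy loop of four coedges; each is mated to some other coedge in the loop
# (a real mate lives on the neighboring face, but the gather works the same).
features = {0: 1.0, 1: 2.0, 2: 3.0, 3: 4.0}
nexts = {0: 1, 1: 2, 2: 3, 3: 0}
mates = {0: 2, 2: 0, 1: 3, 3: 1}
out = coedge_conv(features, nexts, mates, weights=(0.5, 0.25, 0.25))
```

Because the walks are defined purely by topology, the same kernel applies uniformly to every coedge in every model, just as an image kernel applies at every pixel.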
Along with BRepNet we also release a new segmentation dataset with over 35k CAD models in B-Rep, mesh, and point cloud representations.
So if you have access to solid model data, it can really help with classification and segmentation. But what about other tasks? One long-sought goal in CAD is to reverse engineer a CAD model when the modeling history is not available. We tackle this problem with a new representation, the Zone Graph (Paper, Code), where each zone is a solid region formed by extending all B-Rep faces and partitioning space with them.
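A toy 2D analogue may help build intuition (this is my simplification, not the paper's construction): extend a shape's boundary curves to full lines, let them partition a bounding window into cells (the "zones"), and connect zones that share a boundary. The solid is then recoverable as a subset of zones.

```python
from itertools import product

def zone_graph_2d(xs, ys):
    """Partition a window by the given x- and y-lines into grid cells
    (zones) and return the zones plus their 4-neighbor adjacency."""
    zones = list(product(range(len(xs) - 1), range(len(ys) - 1)))
    adj = {z: [] for z in zones}
    for (i, j) in zones:
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            n = (i + di, j + dj)
            if n in adj:
                adj[(i, j)].append(n)
    return zones, adj

# Extending the 4 boundary lines of the square [1,2] x [1,2] across the
# window [0,3] x [0,3] produces a 3 x 3 grid of zones.
xs, ys = [0, 1, 2, 3], [0, 1, 2, 3]
zones, adj = zone_graph_2d(xs, ys)

# Classify zones by their midpoint: only the center cell lies inside the square.
inside = []
for (i, j) in zones:
    mx, my = (xs[i] + xs[i + 1]) / 2, (ys[j] + ys[j + 1]) / 2
    if 1 < mx < 2 and 1 < my < 2:
        inside.append((i, j))
```

Selecting which zones belong to the solid turns reverse engineering into a labeling problem over a graph, which is exactly the kind of structure the networks above are built for.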
For Zone Graphs we train on the Fusion 360 Gallery reconstruction dataset (Paper, Code), which contains human-designed CAD modeling sequences. We will present this work at SIGGRAPH 2021.
Finally, in a CVPR workshop paper we explore how we might synthesize solid models by first generating plausible engineering sketches using transformers (Paper).
Exciting times in CAD land! 🎉 Many thanks to all co-authors and colleagues at Autodesk Research and beyond 👏🏽