A Dataset and Benchmark for Mesh Parameterization

Georgia Shay, MIT

Justin Solomon, MIT

Oded Stein, MIT

Figure 1. A tiled representation of our dataset. The meshes and their UV maps were created by digital artists. They are representative of the challenges that parameterization algorithms face in practice. Using our benchmark, the artist-provided UV maps can be directly compared to the UV maps computed by an automatic parameterization method.

Abstract

UV parameterization is a core task in computer graphics, with applications in mesh texturing, remeshing, mesh repair, mesh editing, and more. It is thus an active area of research, which has led to a wide variety of parameterization methods that excel according to different measures of quality. There is no single metric capturing parameterization quality in practice, since the quality of a parameterization heavily depends on its application; hence, parameterization methods can best be judged by the actual users of the computed result. In this paper, we present a dataset of meshes together with UV maps collected from various sources and intended for real-life use. Our dataset can be used to test parameterization methods in realistic environments. We also introduce a benchmark to compare parameterization methods with artist-provided UV parameterizations using a variety of metrics. This strategy enables us to evaluate the performance of a parameterization method by computing the quality indicators that are valued by the designers of a mesh.

Acknowledgements

This work is supported by the Swiss National Science Foundation’s Early Postdoc.Mobility fellowship.

The MIT Geometric Data Processing group acknowledges the generous support of Army Research Office grants W911NF2010168 and W911NF2110293, of Air Force Office of Scientific Research award FA9550-19-1-031, of National Science Foundation grants IIS-1838071 and CHS-1955697, from the CSAIL Systems that Learn program, from the MIT–IBM Watson AI Laboratory, from the Toyota–CSAIL Joint Research Center, from a gift from Adobe Systems, from an MIT.nano Immersion Lab/NCSOFT Gaming Program seed grant, and from a Google Research Scholar award.

We thank Mazdak Abulnaga, Yu Wang, and Lingxiao Li for help with proofreading.