In:
Textile Research Journal, SAGE Publications, Vol. 91, No. 5-6 (2021-03), pp. 480-495
Abstract:
We propose a method for simulating cloth with meshes that are dynamically refined according to visual saliency. Regions of an image that attract a viewer's attention are commonly expected to show more detail than other regions. For a given scene, a low-resolution cloth mesh is first simulated and rendered into images in a preview stage. Pixel saliency values for these images are predicted by a pre-trained saliency prediction model and then transferred to vertex saliency values on the corresponding meshes. Vertex saliency, together with camera positions and a number of geometric surface features, guides dynamic remeshing for simulation in the production stage. To build the saliency prediction model, images extracted from various videos of clothing scenes were used as training data. Participants were asked to watch these videos while their eye motion was tracked, and a saliency map was generated from the eye motion data for each extracted video frame. Image feature vectors and map labels are fed to a Support Vector Machine to train the saliency prediction model. Our method greatly reduces the number of vertices and faces in the clothing model, yielding a speed-up of more than 3× for scenes with a single dressed character and more than 5× for multi-character scenes. The proposed technique can work together with view-dependent techniques for offline simulation.
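The abstract does not specify how per-pixel saliency is transferred to mesh vertices. A minimal sketch, assuming each vertex's projected pixel position in the rendered preview image is available and using a simple nearest-pixel lookup (the function name and arguments are illustrative, not from the paper):

```python
import numpy as np

def vertex_saliency(saliency_map, vertex_px):
    """Transfer per-pixel saliency to mesh vertices.

    saliency_map: (H, W) array of predicted pixel saliencies in [0, 1].
    vertex_px:    (N, 2) array of each vertex's projected (row, col)
                  pixel coordinates in the rendered preview image.
    Vertices projected outside the image receive zero saliency.
    """
    h, w = saliency_map.shape
    rows = np.round(vertex_px[:, 0]).astype(int)
    cols = np.round(vertex_px[:, 1]).astype(int)
    inside = (rows >= 0) & (rows < h) & (cols >= 0) & (cols < w)
    sal = np.zeros(len(vertex_px))
    sal[inside] = saliency_map[rows[inside], cols[inside]]
    return sal
```

The resulting per-vertex values could then serve as one input, alongside camera position and geometric features, to a sizing criterion for dynamic remeshing.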
Type of Medium:
Online Resource
ISSN:
0040-5175, 1746-7748
DOI:
10.1177/0040517520944248
Language:
English
Publisher:
SAGE Publications
Publication Date:
2021
ZDB ID:
2209596-2