Yoshihiro Kanamori†, Yuki Endo†‡
†University of Tsukuba, ‡Toyohashi University of Technology
SIGGRAPH Asia 2018
Relighting of human images has various applications in image synthesis. For relighting, we must infer albedo, shape, and illumination from a human portrait. Previous techniques rely on human faces for this inference, based on spherical harmonics (SH) lighting. However, because they often ignore light occlusion, inferred shapes are biased and relit images are unnaturally bright, particularly in hollowed regions such as armpits, crotches, and garment wrinkles. This paper presents the first attempt to infer light occlusion directly in the SH formulation. Using supervised learning with convolutional neural networks (CNNs), we infer not only an albedo map and illumination but also a light transport map that encodes occlusion as nine SH coefficients per pixel. The main difficulty in this inference is the scarcity of training data compared to the unlimited variations of human portraits. Surprisingly, geometric information including occlusion can be inferred plausibly even from a small dataset of synthesized human figures, by carefully preparing the dataset so that the CNNs can exploit its coherency. Our method achieves more realistic relighting than the occlusion-ignored formulation.
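The relighting step implied by this formulation can be sketched as follows: once the per-pixel light transport (nine second-order SH coefficients) and the albedo map are inferred, the relit image is the albedo modulated by the dot product of transport and lighting coefficients. This is a minimal illustrative sketch, not the authors' implementation; all array names and shapes are assumptions.

```python
import numpy as np

def relight(albedo, transport, sh_light):
    """Relight an image from inferred components (illustrative sketch).

    albedo    : (H, W, 3)  per-pixel RGB albedo
    transport : (H, W, 9)  per-pixel SH light transport, which encodes
                           both cosine shading and light occlusion
    sh_light  : (9, 3)     scene illumination as SH coefficients per channel
    """
    # Per-pixel, per-channel shading: dot product of the transport
    # vector with the lighting coefficients.
    shading = np.einsum('hwk,kc->hwc', transport, sh_light)
    # The relit image is albedo modulated by shading.
    return albedo * shading

# Toy usage: flat gray albedo under ambient-only (constant SH band) light.
H, W = 4, 4
albedo = np.full((H, W, 3), 0.5)
transport = np.zeros((H, W, 9))
transport[..., 0] = 1.0          # fully unoccluded constant term
sh_light = np.zeros((9, 3))
sh_light[0, :] = 1.0             # ambient white light
img = relight(albedo, transport, sh_light)
```

Because occlusion is baked into the transport coefficients rather than ignored, hollowed regions receive attenuated shading instead of the full, unnaturally bright SH irradiance.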
Keywords: inverse rendering, light transport, convolutional neural network
The authors would like to thank ZOZO Technologies, Inc. for generous financial support throughout this project, without which this work was not possible. The authors would also like to thank the anonymous referees for their constructive comments, and Ms. Sina Kitz for proof-reading the final version of this paper. For our accompanying video, input images courtesy of Kat Garcia, Kinga Cichewicz, George Gvasalia, and Jacob Postuma.
Last modified: 10 Oct, 2018