Portrait Reconstruction and Relighting
using the Sun as a Light Stage

CVPR 2023

1University of Washington, 2Adobe Inc.

Our method, SunStage, can reconstruct detailed geometry, lighting, and reflectance properties from just a single selfie video of the subject rotating under the sun. This reconstructed information can be used to render the subject under new lighting conditions or in different poses.

Abstract

Outdoor portrait photographs are often marred by the harsh shadows cast under direct sunlight. To resolve this, one can use post-capture lighting manipulation techniques, but these methods either require complex hardware (e.g., a light stage) to capture each individual, or rely on image-based priors and thus fail to reconstruct many of the subtle facial details that vary from person to person. In this paper, we present SunStage, a system for accurate, individually tailored, and lightweight reconstruction of facial geometry and reflectance that can be used for general portrait relighting with cast shadows. Our method only requires the user to capture a selfie video outdoors, rotating in place, and uses the varying angles between the sun and the face as constraints in the joint reconstruction of facial geometry, reflectance properties, and lighting parameters. Aside from relighting, we show that our reconstruction can be used for applications like reflectance editing and view synthesis.

Video


Applications & Demos

SunStage recovers parameters that explain the scene, including geometry, pose, lighting, and material properties.

Full Rendering

Geometry

Diffuse lighting

Albedo

Ambient lighting

Specular

Background
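As a rough illustration of how these components fit together (a simplified stand-in, not the paper's actual renderer), the full rendering can be thought of as the albedo modulating the diffuse and ambient lighting, plus a specular term, composited over the background. A minimal NumPy sketch, where the buffer names, toy values, and compositing formula are all assumptions:

```python
import numpy as np

# Toy per-pixel buffers (H x W x 3); the values are illustrative only.
H, W = 4, 4
albedo     = np.full((H, W, 3), 0.6)   # base skin color
diffuse    = np.full((H, W, 3), 0.8)   # sun-dependent diffuse shading
ambient    = np.full((H, W, 3), 0.2)   # sky / environment term
specular   = np.full((H, W, 3), 0.05)  # view-dependent highlights
background = np.zeros((H, W, 3))
alpha      = np.ones((H, W, 1))        # face matte (1 = face pixel)

# Combine the recovered components into a full rendering, then
# composite the rendered face over the original background.
face = albedo * (diffuse + ambient) + specular
full = np.clip(alpha * face + (1 - alpha) * background, 0.0, 1.0)
```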


These parameters can then be used to re-render the subject in the input scene:

Re-render the input sequence

The original capture

Our rendered face
(composited onto the original image)

Try it yourself: move the slider to scrub through the input video.

We can also edit these recovered parameters to show the subject in different configurations. For example, we can:

Change the sun direction

Try it yourself: move the slider to adjust the azimuth of the sun.
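Conceptually, relighting with a new sun position amounts to recomputing the diffuse term with a different light vector. A hedged sketch of that idea, where the azimuth/elevation parameterization and the Lambertian shading are simplifying assumptions rather than the paper's renderer:

```python
import numpy as np

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector pointing toward the sun (y is up)."""
    az, el = np.radians(azimuth_deg), np.radians(elevation_deg)
    return np.array([np.cos(el) * np.sin(az),
                     np.sin(el),
                     np.cos(el) * np.cos(az)])

def lambert_shading(normals, light_dir, intensity=1.0):
    """Diffuse shading for an (N, 3) array of unit normals."""
    return intensity * np.clip(normals @ light_dir, 0.0, None)

# Two example surface normals: one facing the camera, one facing up.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 1.0, 0.0]])
shade = lambert_shading(normals, sun_direction(0.0, 45.0))
```

Scrubbing the azimuth slider corresponds to sweeping `azimuth_deg` and re-evaluating the shading per frame.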

Add a fill light

To further reduce the effect of harsh shadows, we can also add a fill light, as is commonly done in outdoor portrait photography. This brightens the shaded part of the face, reducing the contrast between the lit and shaded regions.

Try it yourself: move the slider to adjust the intensity of the fill light.
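The fill light can be sketched as an extra diffuse term from a second direction, added on top of the existing shading; scaling that term by an RGB value also gives a colored fill. The function name, arguments, and shading model here are illustrative assumptions:

```python
import numpy as np

def add_fill_light(shading, normals, fill_dir, intensity, color=(1.0, 1.0, 1.0)):
    """Add a Lambertian fill-light term to per-pixel RGB shading.

    shading: (N, 3) RGB shading, normals: (N, 3) unit normals,
    fill_dir: unit vector pointing toward the fill light.
    """
    ndotl = np.clip(normals @ fill_dir, 0.0, None)         # (N,)
    fill = intensity * ndotl[:, None] * np.asarray(color)  # (N, 3)
    return shading + fill

# One normal facing the fill light, one facing away from it.
normals = np.array([[0.0, 0.0, 1.0], [0.0, 0.0, -1.0]])
base = np.zeros((2, 3))
lit = add_fill_light(base, normals, np.array([0.0, 0.0, 1.0]), intensity=0.5)
```

The intensity slider maps to `intensity`, and the color edits shown below map to `color`.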

Change the color of the light

We can also change the fill light's color, for a more artistic look.


Simulate an OLAT capture

The recovered parameters can also be used to simulate a one-light-at-a-time (OLAT) sequence, typically captured by a traditional light stage.
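Simulating an OLAT sequence just means evaluating the recovered model once per light direction, with all other lights off. A toy loop under the same simplified Lambertian assumptions as above (names and shapes are ours, not the paper's):

```python
import numpy as np

def simulate_olat(normals, albedo, light_dirs):
    """One frame per light: a diffuse rendering with a single light on."""
    frames = []
    for light in light_dirs:
        ndotl = np.clip(normals @ light, 0.0, None)[:, None]
        frames.append(albedo * ndotl)
    return np.stack(frames)  # (num_lights, N, 3)

normals = np.array([[0.0, 0.0, 1.0]])     # one surface point, facing +z
albedo = np.array([[0.8, 0.6, 0.5]])
lights = [np.array([0.0, 0.0, 1.0]),      # frontal light
          np.array([1.0, 0.0, 0.0])]      # side light (grazing here)
olat = simulate_olat(normals, albedo, lights)
```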


Render novel views

Since we recover the geometry of the face, we can also re-render it from arbitrary novel viewpoints.


Edit the face albedo

To add freckles, makeup, or stickers that realistically interact with the scene lighting (such as specular reflections and shadows), we can edit the skin's albedo.
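Because the edit is painted into the albedo before shading, the decal automatically inherits the scene's shadows and highlights. A minimal alpha-blend into the albedo map, where the function name and blending scheme are assumptions for illustration:

```python
import numpy as np

def paint_albedo(albedo, decal, decal_alpha):
    """Alpha-blend a decal (freckles, makeup, sticker) into the albedo map.

    albedo, decal: (H, W, 3) RGB maps; decal_alpha: (H, W, 1) in [0, 1].
    The edited albedo is then shaded as usual, so the decal picks up
    the scene's shadows and speculars for free.
    """
    return decal_alpha * decal + (1.0 - decal_alpha) * albedo

albedo = np.full((2, 2, 3), 0.6)          # uniform skin tone
decal = np.zeros((2, 2, 3))
decal[0, 0] = [1.0, 0.0, 0.0]             # a red dot at one texel
alpha = np.zeros((2, 2, 1))
alpha[0, 0] = 1.0                         # decal covers only that texel
edited = paint_albedo(albedo, decal, alpha)
```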



BibTeX

@inproceedings{wang2023sunstage,
  title={Sunstage: Portrait reconstruction and relighting using the sun as a light stage},
  author={Wang, Yifan and Holynski, Aleksander and Zhang, Xiuming and Zhang, Xuaner},
  booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
  pages={20792--20802},
  year={2023}
}

Acknowledgements

Special thanks to Marc Levoy and Richard Szeliski for their constructive feedback; Roy Or-El, Daniel Miau, Yoo Zhang, Meredith Wu, Lars Jebe, Zhihao Xia, Yi Zhou, and Florian Kainz for their help in capturing data; Rohit Pandey for his help in running the Total Relighting comparisons; Qixuan Zhang for his help in running the Neural Video Portrait Relighting comparisons; and Xuan Luo for her feedback on the video.

This work was supported by the UW Reality Lab, Adobe, Amazon, Google, Meta, and OPPO.

This website template was borrowed from HyperNeRF. Thanks Keunhong!