Outdoor portrait photographs are often marred by the harsh shadows cast under direct sunlight. To resolve this, one can use post-capture lighting manipulation techniques, but these methods either require complex hardware (e.g., a light stage) to capture each individual, or rely on image-based priors and thus fail to reconstruct many of the subtle facial details that vary from person to person. In this paper, we present SunStage, a system for accurate, individually-tailored, and lightweight reconstruction of facial geometry and reflectance that can be used for general portrait relighting with cast shadows. Our method only requires the user to capture a selfie video outdoors, rotating in place, and uses the varying angles between the sun and the face as constraints in the joint reconstruction of facial geometry, reflectance properties, and lighting parameters. Aside from relighting, we show that our reconstruction can be used for applications like reflectance editing and view synthesis.
SunStage recovers parameters that explain the scene, including geometry, pose, lighting, and material properties.
Full Rendering
Geometry
Diffuse lighting
Albedo
Ambient lighting
Specular
Background
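The recovered components above can be thought of as terms in a simple shading model. The sketch below shows one illustrative way such parameters might be recombined into a full rendering; the function name, the Lambertian-diffuse-plus-ambient-plus-specular model, and the alpha compositing over the background are assumptions for illustration, not the paper's exact formulation.

```python
# Hypothetical sketch: combining recovered per-pixel parameters into a full
# rendering. The shading model here (Lambertian diffuse + ambient + additive
# specular, alpha-composited over the background) is an illustrative
# assumption, not necessarily SunStage's exact formulation.
import numpy as np

def compose_render(albedo, normals, sun_dir, sun_color,
                   ambient, specular, background, alpha):
    """albedo, background, specular: (H, W, 3); normals: (H, W, 3) unit
    vectors; alpha: (H, W, 1) foreground matte in [0, 1]."""
    sun_dir = np.asarray(sun_dir, dtype=float)
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    # Lambertian diffuse shading from the directional sun light.
    n_dot_l = np.clip(normals @ sun_dir, 0.0, None)[..., None]
    diffuse = n_dot_l * np.asarray(sun_color, dtype=float)
    # Shade the albedo, add the specular lobe, composite over the background.
    face = albedo * (diffuse + ambient) + specular
    return alpha * face + (1.0 - alpha) * background
```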
These parameters can then be used to re-render the subject in the input scene:
The original capture
Our rendered face
(composited onto the original image)
Try it yourself: move the slider to scrub through the input video.
We can also edit these recovered parameters to show the subject in different configurations. For example, we can change the position of the sun in the sky.
Try it yourself: move the slider to adjust the azimuth of the sun.
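Under the hood, a slider like this only needs to map an angle to a light direction. Here is a minimal sketch of one such mapping; the coordinate convention (y up, azimuth measured from +z toward +x) is an assumption for illustration.

```python
# A minimal sketch of mapping a sun-azimuth/elevation slider to a unit light
# direction for relighting. The coordinate convention (y up, azimuth measured
# from +z toward +x) is an illustrative assumption.
import math

def sun_direction(azimuth_deg, elevation_deg):
    """Unit vector pointing from the scene toward the sun."""
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    return (math.cos(el) * math.sin(az),   # x
            math.sin(el),                  # y (up)
            math.cos(el) * math.cos(az))   # z
```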
By virtually changing the size of the sun, we can soften the harsh shadows on the face.
Try it yourself: move the slider to adjust the size of the sun.
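One way to realize a "larger sun" is to treat it as a disk of some angular radius rather than a point, averaging renderings over directions sampled within that cone. The sketch below assumes a generic `shade` callback (any renderer that takes a single light direction); the Monte-Carlo spherical-cap sampling is an illustrative choice, not necessarily SunStage's implementation.

```python
# A hedged sketch of shadow softening: treat the sun as a disk of angular
# radius `sun_radius_rad` and average shading over sampled directions within
# that cone. `shade` is a stand-in for any single-light-direction renderer.
import numpy as np

def sample_cone(axis, angle_rad, u, v):
    """Sample a direction uniformly on the spherical cap of half-angle
    `angle_rad` around the unit vector `axis`, given uniforms u, v."""
    cos_t = 1.0 - u * (1.0 - np.cos(angle_rad))
    sin_t = np.sqrt(max(0.0, 1.0 - cos_t * cos_t))
    phi = 2.0 * np.pi * v
    local = np.array([sin_t * np.cos(phi), sin_t * np.sin(phi), cos_t])
    # Build an orthonormal basis with `axis` as the local z.
    up = np.array([0.0, 1.0, 0.0]) if abs(axis[2]) > 0.9 else np.array([0.0, 0.0, 1.0])
    x = np.cross(up, axis)
    x /= np.linalg.norm(x)
    y = np.cross(axis, x)
    return local[0] * x + local[1] * y + local[2] * axis

def soften_shadows(shade, sun_dir, sun_radius_rad, n_samples=16, seed=0):
    """Average `shade(direction)` over jittered directions in the sun disk."""
    rng = np.random.default_rng(seed)
    sun_dir = np.asarray(sun_dir, dtype=float)
    sun_dir = sun_dir / np.linalg.norm(sun_dir)
    imgs = [shade(sample_cone(sun_dir, sun_radius_rad, rng.random(), rng.random()))
            for _ in range(n_samples)]
    return sum(imgs) / n_samples
```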
To reduce the effect of harsh shadows even further, we can also add a fill light, as is common practice in outdoor portrait photography. This brightens the shaded side of the face, reducing the contrast between the lit and shaded regions.
Try it yourself: move the slider to adjust the intensity of the fill light.
We can also change the fill light's color, for a more artistic look.
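A virtual fill light can be modeled as a second, dimmer directional light aimed at the shaded side, with its own intensity and color. The sketch below is an illustrative assumption about how such a fill might be added on top of the recovered sun shading, not necessarily how SunStage implements it.

```python
# A hedged sketch of adding a virtual fill light: a second, dimmer
# directional light with adjustable intensity and color, added to the
# recovered sun shading. Illustrative model, not the paper's exact one.
import numpy as np

def shade_with_fill(albedo, normals, sun_dir, sun_color,
                    fill_intensity, fill_color, fill_dir=None):
    """albedo: (H, W, 3); normals: (H, W, 3) unit vectors."""
    def lambert(d):
        d = np.asarray(d, dtype=float)
        d = d / np.linalg.norm(d)
        return np.clip(normals @ d, 0.0, None)[..., None]

    if fill_dir is None:
        # By default, aim the fill opposite the sun's horizontal direction,
        # so it brightens the shaded side of the face.
        fill_dir = np.array([-sun_dir[0], sun_dir[1], -sun_dir[2]], dtype=float)
    diffuse = lambert(sun_dir) * np.asarray(sun_color, dtype=float)
    fill = fill_intensity * lambert(fill_dir) * np.asarray(fill_color, dtype=float)
    return albedo * (diffuse + fill)
```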
By combining the above edits, we can virtually adjust the time of day to get that perfect shot at the golden hour.
Try it yourself: move the slider to adjust the time of day.
The recovered parameters can also be used to simulate a one-light-at-a-time (OLAT) sequence, typically captured by a traditional light stage.
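Simulating an OLAT sequence amounts to rendering the recovered model once per light position, with all other lights off. The sketch below assumes a generic `render` callback and lays lights out in azimuth rings on an upper hemisphere; both the layout and the function names are illustrative, not a description of any real light stage.

```python
# A hedged sketch of simulating a one-light-at-a-time (OLAT) sequence from
# recovered parameters: render once per light direction, all others off.
# `render` stands in for any single-directional-light renderer; the ring
# layout of directions is an illustrative assumption.
import math

def olat_directions(n_azimuth=8, elevations_deg=(0.0, 30.0, 60.0)):
    """Unit directions in azimuth rings on the upper hemisphere."""
    dirs = []
    for el_deg in elevations_deg:
        el = math.radians(el_deg)
        for i in range(n_azimuth):
            az = 2.0 * math.pi * i / n_azimuth
            dirs.append((math.cos(el) * math.sin(az),
                         math.sin(el),
                         math.cos(el) * math.cos(az)))
    return dirs

def simulate_olat(render, directions):
    """One rendered frame per light direction, like a light-stage sweep."""
    return [render(d) for d in directions]
```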
Instead of modifying the scene parameters, we can also replace them, rendering the person in a totally new virtual environment.
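Relighting under a new environment requires integrating the environment's radiance against the surface's reflectance. As one illustrative piece of that, the sketch below approximates diffuse irradiance for a given normal by a cosine-weighted sum over an equirectangular environment map; the layout and the simple Riemann sum are assumptions for illustration.

```python
# A hedged sketch of diffuse shading under a replacement environment:
# approximate the irradiance for a normal by summing an equirectangular
# environment map over directions, weighted by the cosine term and each
# texel's solid angle. Riemann-sum quadrature is an illustrative choice.
import numpy as np

def diffuse_irradiance(env_map, normal):
    """env_map: (H, W, 3) equirectangular radiance; normal: unit (3,)."""
    H, W, _ = env_map.shape
    theta = (np.arange(H) + 0.5) / H * np.pi          # polar angle
    phi = (np.arange(W) + 0.5) / W * 2.0 * np.pi      # azimuth
    sin_t = np.sin(theta)[:, None]
    dirs = np.stack([sin_t * np.cos(phi)[None, :],
                     np.cos(theta)[:, None] * np.ones((1, W)),
                     sin_t * np.sin(phi)[None, :]], axis=-1)
    weight = sin_t * (np.pi / H) * (2.0 * np.pi / W)  # texel solid angle
    cos_term = np.clip(dirs @ np.asarray(normal, dtype=float), 0.0, None)
    # Divide by pi so a uniform unit-radiance environment yields irradiance 1.
    return (env_map * (cos_term * weight)[..., None]).sum(axis=(0, 1)) / np.pi
```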
Since we're recovering the geometry of the face, we can also re-render the face from arbitrary novel viewpoints.
We can also virtually adjust camera parameters like focal length to add or remove perspective distortion from the face.
Try it yourself: move the slider to adjust the focal length of the camera.
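Changing the focal length alone would also change the subject's size in the frame, so a perspective-distortion edit is usually paired with a compensating camera move (a "dolly zoom"). Under a pinhole model, image size is proportional to focal length divided by subject distance, so the distance must scale with the focal length. A minimal sketch of that relation, with hypothetical names:

```python
# A minimal sketch of dolly-zoom compensation under a pinhole camera model:
# image size is proportional to focal / distance, so keeping the subject's
# size fixed means scaling the distance with the focal length.
def dolly_compensated_distance(old_focal_mm, new_focal_mm, old_distance_m):
    """Camera distance that keeps subject size fixed after a focal change."""
    return old_distance_m * (new_focal_mm / old_focal_mm)
```

Moving from a short selfie focal length to a longer one (and backing the camera up accordingly) is exactly what flattens the exaggerated nose and narrow face of close-up selfies.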
To add freckles, makeup, or stickers that realistically interact with the scene lighting (including specular reflections and shadows), we can edit the skin's albedo.
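Because the edit happens in the albedo texture before shading, the decal automatically picks up the scene's shadows and highlights when the face is re-rendered. A minimal sketch of the blend, assuming a UV-space albedo texture and a decal with its own alpha matte (names are illustrative):

```python
# A hedged sketch of albedo editing: alpha-blend a decal (freckles, makeup,
# a sticker) into the recovered UV-space albedo texture, so the edit is
# shaded and shadowed like real skin when the face is re-rendered.
import numpy as np

def blend_decal(albedo_uv, decal_rgb, decal_alpha):
    """albedo_uv, decal_rgb: (H, W, 3); decal_alpha: (H, W, 1) in [0, 1]."""
    return decal_alpha * decal_rgb + (1.0 - decal_alpha) * albedo_uv
```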
To reduce the shininess (or sweatiness) of the subject, we can edit the skin's reflectance properties to reduce the specular coefficients.
Try it yourself: move the slider to adjust the shininess of the face.
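In a typical analytic reflectance model, shininess is controlled by a specular coefficient that can simply be scaled down after reconstruction. The sketch below uses a Blinn-Phong specular lobe for illustration; the actual SunStage reflectance model may differ.

```python
# A minimal sketch of shininess editing using a Blinn-Phong specular term
# (an illustrative stand-in for the recovered reflectance model). Scaling
# the coefficient `k_s` toward zero removes oily/sweaty highlights.
import numpy as np

def blinn_phong_specular(normals, light_dir, view_dir, k_s, exponent=32.0):
    """normals: (H, W, 3) unit vectors; returns per-pixel specular (H, W)."""
    def normed(v):
        v = np.asarray(v, dtype=float)
        return v / np.linalg.norm(v)

    # Half vector between the light and view directions.
    h = normed(normed(light_dir) + normed(view_dir))
    n_dot_h = np.clip(normals @ h, 0.0, None)
    return k_s * n_dot_h ** exponent
```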
@inproceedings{wang2023sunstage,
title={{SunStage}: Portrait Reconstruction and Relighting Using the Sun as a Light Stage},
author={Wang, Yifan and Holynski, Aleksander and Zhang, Xiuming and Zhang, Xuaner},
booktitle={Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition},
pages={20792--20802},
year={2023}
}
Special thanks to Marc Levoy and Richard Szeliski for their constructive feedback; Roy Or-El, Daniel Miau, Yoo Zhang, Meredith Wu, Lars Jebe, Zhihao Xia, Yi Zhou, and Florian Kainz for their help in capturing data; Rohit Pandey for his help in running the Total Relighting comparisons; Qixuan Zhang for his help in running the Neural Video Portrait Relighting comparisons; and Xuan Luo for her feedback on the video.
This work was supported by the UW Reality Lab, Adobe, Amazon, Google, Meta, and OPPO.
This website template was borrowed from HyperNeRF. Thanks Keunhong!