Abstract
Depth of field (DoF) is an important factor in synthesizing realistic photographic effects. In this paper, we discuss a new image-based approach that can synthesize DoF effects without user interaction. Our approach produces saliency maps with real-time performance. In particular, no depth information needs to be captured from cameras; instead, depth is approximated using saliency maps. We also take advantage of flash/non-flash image pairs to refine the DoF synthesis quality, and the focused regions are segmented using GrabCut. The experimental results show that our image-based DoF synthesis can simulate high-quality DoF effects efficiently. Copyright © 2014 the Owner/Author.
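The abstract describes the core idea — use a saliency map as a depth proxy and blur low-saliency (assumed out-of-focus) regions — but gives no code. Below is a minimal NumPy sketch of that idea, assuming the spectral-residual saliency method (Hou & Zhang, a common real-time choice) as the saliency estimator; all function names and parameters are illustrative, not the authors' implementation, and the flash/non-flash refinement and GrabCut segmentation steps are omitted.

```python
import numpy as np

def gaussian_blur(img, sigma):
    """Separable Gaussian blur via two 1-D convolutions (rows, then columns)."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    k /= k.sum()
    tmp = np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 0, img)
    return np.apply_along_axis(lambda m: np.convolve(m, k, mode='same'), 1, tmp)

def spectral_residual_saliency(img):
    """Spectral-residual saliency map, normalized to [0, 1], used as a depth proxy."""
    f = np.fft.fft2(img)
    log_amp = np.log1p(np.abs(f))
    phase = np.angle(f)
    # 3x3 box-filtered log amplitude (wrap-around borders, for simplicity)
    avg = sum(np.roll(np.roll(log_amp, dy, 0), dx, 1)
              for dy in (-1, 0, 1) for dx in (-1, 0, 1)) / 9.0
    residual = log_amp - avg
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    sal = gaussian_blur(sal, sigma=3.0)  # smooth the raw response
    return (sal - sal.min()) / (sal.max() - sal.min() + 1e-8)

def synthesize_dof(img, saliency, max_sigma=5.0):
    """Blend sharp and blurred layers: low-saliency pixels receive more blur."""
    blurred = gaussian_blur(img, max_sigma)
    return saliency * img + (1.0 - saliency) * blurred
```

In the paper's full pipeline, the saliency map would additionally be refined with the flash/non-flash pair and the in-focus region cleaned up with GrabCut before blending; this sketch applies the per-pixel blend directly.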
Original language | English |
---|---|
Title of host publication | Proceedings VRCAI 2014: The 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry |
Place of Publication | New York |
Publisher | Association for Computing Machinery |
Pages | 133-140 |
ISBN (Print) | 9781450332545 |
DOIs | |
Publication status | Published - 2014 |
Citation
Zhang, X., Xue, W., Wang, Z., Sheng, B., Ma, L., & Li, P. (2014). Saliency-based depth-of-field rendering with flash and non-flash pairs. In Proceedings VRCAI 2014: The 13th ACM SIGGRAPH International Conference on Virtual-Reality Continuum and its Applications in Industry (pp. 133-140). New York: Association for Computing Machinery.
Keywords
- Constant time
- Depth-of-Field
- Global illumination
- Image defocus
- Matting
- Radiosity
- Saliency