Video decolorization filters out color information while preserving the perceivable content of the video as fully and accurately as possible. Existing methods mainly apply image decolorization strategies to videos, which can be slow and produce incoherent results. In this paper, we propose a video decolorization framework that enforces frame coherence and saves decolorization time by reusing previously decolorized frames. This work makes three main contributions. First, we define decolorization proximity to measure the similarity of adjacent frames. Second, we propose three decolorization strategies for frames with low, medium, and high proximity, to preserve the quality of each of these three types of frames. Third, we propose a novel decolorization Gaussian mixture model (DC-GMM) that classifies frames by their decolorization proximity and assigns the appropriate strategy to each. To evaluate our results, we measure them from three aspects: 1) qualitative; 2) quantitative; and 3) user study. We apply the color contrast preserving ratio and C2G-SSIM to evaluate the quality of single-frame decolorization, and we propose a novel temporal coherence degree metric to evaluate the temporal coherence of the decolorized video. Compared with current methods, the proposed approach shows all-around better performance in time efficiency, temporal coherence, and quality preservation. Copyright © 2017 IEEE.
Citation: Tao, Y., Shen, Y., Sheng, B., Li, P., & Lau, R. W. H. (2018). Video decolorization using visual proximity coherence optimization. IEEE Transactions on Cybernetics, 48(5), 1406-1419. doi: 10.1109/TCYB.2017.2695655
- Decolorization Gaussian mixture model (DC-GMM) classifier
- Decolorization proximity
- Result reutilization
- Temporal coherence
- Video decolorization
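The abstract's core idea, measuring the decolorization proximity of adjacent frames and routing each frame to a low-, medium-, or high-proximity strategy, can be illustrated with a minimal sketch. Note the assumptions: the paper's actual proximity measure and DC-GMM classifier are not specified in the abstract, so a per-channel histogram-intersection similarity stands in for the proximity, and fixed thresholds (`low`, `high`) stand in for the learned GMM component boundaries; `decolorization_proximity` and `classify_proximity` are hypothetical names.

```python
import numpy as np

def decolorization_proximity(frame_a, frame_b, bins=32):
    """Similarity of two RGB frames in [0, 1] (stand-in for the paper's measure).

    Uses per-channel histogram intersection averaged over R, G, B.
    """
    sims = []
    for c in range(3):
        ha, _ = np.histogram(frame_a[..., c], bins=bins, range=(0, 256), density=True)
        hb, _ = np.histogram(frame_b[..., c], bins=bins, range=(0, 256), density=True)
        # Histogram intersection, normalized so identical frames score 1.0.
        sims.append(np.minimum(ha, hb).sum() / max(ha.sum(), 1e-9))
    return float(np.mean(sims))

def classify_proximity(p, low=0.5, high=0.9):
    """Assign a decolorization strategy tier from the proximity score.

    The paper fits a DC-GMM to these scores; the fixed thresholds here
    are an illustrative assumption, not the learned boundaries.
    """
    if p < low:
        return "low"      # dissimilar frame: decolorize from scratch
    if p < high:
        return "medium"   # partly similar: partially reuse the previous result
    return "high"         # near-identical: reuse the previous decolorized frame
```

In this sketch a static shot yields high proximity (cheap reuse of the prior gray frame), while a scene cut yields low proximity and triggers full per-frame decolorization, which is the time-saving behavior the abstract describes.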