
Paper / Book Information


Title
Japanese:
English: Illumination Normalization-Based Face Detection under Varying Illumination
Authors
Japanese: Yao Min, 長橋宏, 青木工太
English: Min Yao, Hiroshi Nagahashi, Kota Aoki
Language: English
Journal / Book Title
Japanese:
English: IEICE Transactions on Information and Systems
Volume, Issue, Pages: Vol. E97-D, No. 6, pp. 1590-1598
Publication Date: June 1, 2014
DOI: https://doi.org/10.1587/transinf.E97.D.1590
Abstract: A number of well-known learning-based face detectors achieve extraordinary performance in controlled environments, but face detection under varying illumination remains challenging. Possible solutions to this illumination problem include creating illumination-invariant features or utilizing skin color information; however, such features and skin colors are not sufficiently reliable under difficult lighting conditions. Another possible solution is to perform illumination normalization (e.g., Histogram Equalization (HE)) before running face detectors, but applications of normalization to face detection have not been widely studied in the literature. This paper applies and evaluates various existing normalization methods within a framework that combines illumination normalization with two learning-based face detectors (a Haar-like face detector and an LBP face detector). These methods were originally proposed for different purposes (face recognition or image quality enhancement), but according to comparative experiments on two databases, some of them significantly improve the original face detectors and outperform HE. In addition, we propose a new normalization method called segmentation-based half histogram stretching and truncation (SH) for face detection under varying illumination. It first employs Otsu's method to segment the histogram (intensities) of the input image into several spans and then redistributes the intensities within the segmented spans. In this way, non-uniform illumination can be efficiently compensated and local facial structures can be appropriately enhanced. Our method achieves good performance in the experiments.
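
The abstract describes the SH procedure only at a high level: Otsu's method segments the intensity histogram into spans, and the intensities are then redistributed span by span before a standard detector is run. Below is a minimal sketch of that idea in Python, assuming OpenCV and NumPy; the per-span linear stretching, the truncation quantiles, and the image/cascade paths are illustrative assumptions, not the authors' actual SH implementation.

```python
# Illustrative sketch only: the paper's exact redistribution rule is not given in the
# abstract, so the stretching and truncation details below are assumptions.
import cv2
import numpy as np

def sh_normalize(gray, low_clip=0.01, high_clip=0.99):
    """Segmentation-based histogram stretching and truncation (illustrative)."""
    # 1. Otsu's method splits the histogram into a dark span and a bright span.
    thresh, _ = cv2.threshold(gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)

    out = np.zeros(gray.shape, dtype=np.float32)
    dark = gray <= thresh
    bright = ~dark

    def stretch(values, lo_out, hi_out):
        # Truncate a small fraction of extreme pixels, then linearly map the
        # remaining range of this span onto [lo_out, hi_out].
        lo_in = np.quantile(values, low_clip)
        hi_in = np.quantile(values, high_clip)
        clipped = np.clip(values, lo_in, hi_in)
        return lo_out + (clipped - lo_in) * (hi_out - lo_out) / max(hi_in - lo_in, 1e-6)

    # 2. Redistribute each span over its own half of the output range so that
    #    non-uniform illumination is compensated while local contrast is kept.
    if dark.any():
        out[dark] = stretch(gray[dark].astype(np.float32), 0.0, 127.0)
    if bright.any():
        out[bright] = stretch(gray[bright].astype(np.float32), 128.0, 255.0)
    return np.clip(out, 0, 255).astype(np.uint8)

# Normalization is applied before a learning-based detector, e.g. OpenCV's bundled
# Haar-like cascade; "face.jpg" is a placeholder input image.
img = cv2.imread("face.jpg", cv2.IMREAD_GRAYSCALE)
norm = sh_normalize(img)
cascade = cv2.CascadeClassifier(cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
faces = cascade.detectMultiScale(norm, scaleFactor=1.1, minNeighbors=5)
print(faces)
```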
