Donghoon Lee, Hyunsin Park, Junyoung Chung, Youngook Song, and Chang D. Yoo
Face verification in an uncontrolled environment is a challenging task due to large variations in pose, illumination, expression, occlusion, age, scale, and misalignment. To account for these intra-personal variations, this paper proposes a sparsity sharing embedding (SSE) method for face verification that takes into account a pair of input faces under different settings. The proposed SSE method measures the distance between two input faces xA and xB under intra-personal settings sA and sB in two steps: 1) in the association step, xA and xB are each represented by a reconstructive weight vector over a generic identity dataset under settings sA and sB, respectively; 2) in the prediction step, the associated faces are replaced by embedding vectors that preserve their identity while capturing the inter-personal structure across the intra-personal settings. Experiments on the Multi-PIE dataset show that the SSE method achieves a higher verification rate than the associate-predict (AP) model.
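The two-step distance described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's implementation: `sse_distance`, its arguments, and the use of ridge regression (as a simple stand-in for the sparse reconstruction implied by "sparsity sharing") are all assumptions made for clarity.

```python
import numpy as np

def sse_distance(x_a, x_b, D_a, D_b, E, lam=1e-3):
    """Hypothetical sketch of a two-step SSE-style distance.

    x_a, x_b : input faces under intra-personal settings sA and sB.
    D_a, D_b : (d, n) generic identity dataset under settings sA and sB;
               column i of both matrices belongs to the same identity.
    E        : (k, n) embedding vectors, one per generic identity.
    lam      : ridge regularizer (stand-in for sparse coding in the paper).
    """
    # Association step: represent each input face as a weighted
    # combination of the generic identities under its own setting.
    n = D_a.shape[1]
    w_a = np.linalg.solve(D_a.T @ D_a + lam * np.eye(n), D_a.T @ x_a)
    w_b = np.linalg.solve(D_b.T @ D_b + lam * np.eye(n), D_b.T @ x_b)
    # Prediction step: map the identity weights into the embedding
    # space, where faces are compared independently of setting.
    return float(np.linalg.norm(E @ w_a - E @ w_b))
```

Because the weight vectors are computed against setting-specific dictionaries but compared in a shared embedding space, two images of the same person under different settings should map to nearby embeddings.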
1. Donghoon Lee, Hyunsin Park, Junyoung Chung, Youngook Song, and Chang D. Yoo, "Sparsity sharing embedding for face verification," in Proceedings of the Asian Conference on Computer Vision, Daejeon, Korea, November 2012. (Poster, 23.2% acceptance rate)