Subspace fitting via robust Jacobian kernel PCA

Jun Fujiki and Shotaro Akaho
(National Institute of Advanced Industrial Science and Technology)

The subspace fitting method based on original kernel principal component analysis (PCA), which minimizes the squared distance in feature space, sometimes yields poor estimates because it does not reflect the metric on the input space. The authors therefore previously proposed a subspace fitting method based on kernel PCA that respects the input-space metric, called Jacobian kernel PCA. However, Jacobian kernel PCA is more sensitive to noise than original kernel PCA, precisely because it directly reflects the input-space metric. In this paper, the authors propose robust subspace fitting methods based on Jacobian kernel PCA: an outlier detection method based on random sample consensus (RANSAC) and the χ² test, a robust method based on M-estimation, and a robust method based on least absolute value estimation.

1. Introduction

Principal component analysis (PCA) fits a subspace to data in the least-squares sense. Nonlinear PCA (NLPCA) 1),24) extends this by mapping the data into a feature space through a feature map and applying PCA there.
Kernel PCA (KPCA) 4),21) realizes NLPCA efficiently through the kernel trick and is closely related to principal curves and principal surfaces. Fitting by KPCA, however, measures distances in feature space, which in general do not agree with the metric on the input space; fitting methods that account for point-wise metrics or covariances have long been studied for curves and surfaces 2),5),16),18),20),23). The authors' Jacobian KPCA 10),11),12) pulls the input-space metric back into feature space through the Jacobian of the feature map, so that the fitted surface minimizes a first-order approximation of input-space distance (the equi-metric property). Precisely because it reflects the input-space metric directly, Jacobian KPCA is more sensitive to outliers than plain KPCA. This paper therefore robustifies it in three ways: outlier removal by random sample consensus (RANSAC) 7) combined with a χ² test 22), M-estimation 15) with the log cosh loss (a smooth relative of Huber's loss), and least absolute value estimation; rotational-invariant robust PCA (R1-PCA) 6) is a related robust formulation.

2. Jacobian KPCA

2.1 Metric on the input space

Let $\{x_{[d]}\}_{d=1}^{D} \subset \mathbb{R}^m$ be the data points, and let $G_{x[d]}$ be the (positive semi-definite) metric of the input space at $x_{[d]}$. For a point $x = x_{[d]} + \delta x$ in a neighborhood of $x_{[d]}$, the squared distance between $x$ and $x_{[d]}$ is
$$(r_p)^2 = (\delta x)^\top G_{x[d]}\,(\delta x).$$

2.2 Metric in the feature space

Let $\phi : x \mapsto \phi(x)$ be the feature map into the ($n$-dimensional, possibly infinite-dimensional) Hilbert space $\mathcal{H}$, and let $J_\phi$ denote its Jacobian.
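To make $\phi$ and $J_\phi$ concrete, the following minimal Python sketch (our illustration, not code from the paper) uses a hypothetical quadratic feature map for conic fitting in the plane and numerically checks the first-order linearization $\phi(x) \simeq \phi_{[d]} + J_{\phi[d]}(x - x_{[d]})$ used below; the map and the function names are assumptions for illustration.

    import numpy as np

    def phi(p):
        """Quadratic (conic) feature map R^2 -> R^6: (x, y) -> (x^2, xy, y^2, x, y, 1)."""
        x, y = p
        return np.array([x * x, x * y, y * y, x, y, 1.0])

    def jacobian_phi(p):
        """Jacobian J_phi (6 x 2) of the feature map at p."""
        x, y = p
        return np.array([
            [2 * x, 0.0],
            [y,     x],
            [0.0,   2 * y],
            [1.0,   0.0],
            [0.0,   1.0],
            [0.0,   0.0],
        ])

    # First-order check: phi(p + dp) ≈ phi(p) + J_phi(p) @ dp for small dp.
    p, dp = np.array([1.0, -0.5]), np.array([1e-4, 2e-4])
    print(np.allclose(phi(p + dp), phi(p) + jacobian_phi(p) @ dp, atol=1e-6))

The residual of the check is second order in the perturbation, which is exactly the approximation the linearization relies on.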
Linearizing the feature map around $x_{[d]}$,
$$\phi(x) \simeq \phi_{[d]} + J_{\phi[d]}(x - x_{[d]}), \qquad \phi_{[d]} = \phi(x_{[d]}),$$
and writing $\phi(x + \delta x) = \phi(x) + \delta\phi(x)$, the squared distance is expressed in feature space as
$$(r_p)^2 = (\delta\phi(x))^\top G_{\phi[d]}\,\delta\phi(x),$$
where
$$G_{\phi[d]} = (J^+_{\phi[d]})^\top G_{x[d]}\, J^+_{\phi[d]}, \qquad G^+_{\phi[d]} = J_{\phi[d]}\, G^+_{x[d]}\, J^\top_{\phi[d]},$$
and $X^+$ denotes the Moore-Penrose pseudo-inverse of $X$.

Fitting the hyperplane $a^\top\phi + b = 0$ in feature space to the mapped points $\phi_{[d]} = \phi(x_{[d]})$ then amounts to minimizing
$$E(a) = \sum_{d=1}^{D} \frac{(a^\top\phi_{[d]} + b)^2}{a^\top G^+_{\phi[d]}\, a}. \qquad (1)$$
Collecting $a$ and $b$ into homogeneous quantities,
$$\tilde\phi = \begin{pmatrix}\phi \\ 1\end{pmatrix}, \qquad \tilde a = \begin{pmatrix}a \\ b\end{pmatrix}, \qquad \tilde G_{\phi[d]} = \begin{pmatrix} G_{\phi[d]} & 0_n \\ 0_n^\top & 0 \end{pmatrix},$$
this is rewritten as
$$E(\tilde a) = \sum_{d=1}^{D} \frac{\tilde a^\top \tilde\phi_{[d]} \tilde\phi_{[d]}^\top\, \tilde a}{\tilde a^\top \tilde G^+_{\phi[d]}\, \tilde a}, \qquad (2)$$
the normalized form familiar from metric-aware fitting 2),16),23).

2.3 Jacobian kernel

For the kernel $k(x, y) = \phi(x)^\top \phi(y)$, define the ($m$-vector-valued) Jacobian kernel as
$$k'(x, y) = \frac{\partial k(x, y)}{\partial x} = J_\phi(x)^\top \phi(y).$$
By the usual representer argument the normal lies in the span of the mapped data, $a = \Phi\alpha$, with $\Phi = (\phi_{[1]} \cdots \phi_{[D]})$ ($n \times D$) and a coefficient vector $\alpha$ ($D \times 1$). Define the $D \times D$ Gram matrix $K$ by $(K)_{ij} = k(x_{[i]}, x_{[j]})$, with $d$-th column $K_{[d]}$, and the $D \times m$ matrices
$$K'_{[d]} = \left(k'_{[d][1]} \cdots k'_{[d][D]}\right)^\top, \qquad k'_{[d][j]} = k'(x_{[d]}, x_{[j]}),$$
so that $K_{[d]} = \Phi^\top \phi_{[d]}$ and $K'_{[d]} = \Phi^\top J_{\phi[d]}$. Substituting $a = \Phi\alpha$ into (1) (dropping the bias $b$ for brevity) gives
$$E(\alpha) = \sum_{d=1}^{D} \frac{\alpha^\top K_{[d]} K_{[d]}^\top\, \alpha}{\alpha^\top K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]}\, \alpha}, \qquad (3)$$
where in the Euclidean case $G_{x[d]} = I_m$.

2.4 Alternating minimization

Eq. (3) is minimized by alternation. Fixing $\alpha$, put
$$\mu_{[d]} = \alpha^\top K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]}\, \alpha, \qquad \Lambda = \mathrm{diag}\{\mu_{[1]}, \ldots, \mu_{[D]}\};$$
then $E(\alpha) \simeq \alpha^\top K \Lambda^{-1} K \alpha$, minimized by
$$\alpha = \mathrm{MinEigenVec}\left[K \Lambda^{-1} K\right],$$
where $\mathrm{MinEigenVec}[X]$ denotes the unit eigenvector of $X$ belonging to the smallest eigenvalue. Starting from $\mu^{[0]}_{[d]} = 1$ ($d = 1, \ldots, D$), repeat
(a) $\alpha^{[k+1]} = \mathrm{MinEigenVec}[K (\Lambda^{[k]})^{-1} K]$,
(b) $\mu^{[k+1]}_{[d]} = (\alpha^{[k+1]})^\top K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]}\, (\alpha^{[k+1]})$,
until the weights $\{\mu_{[d]}\}_{d=1}^{D}$ converge.

2.5 Related normalizations

2.5.1 Original KPCA. Fixing $\{\mu^{[0]}_{[d]} = 1\}_{d=1}^{D}$ gives $E_0(\alpha) = \alpha^\top K^2 \alpha$ and $\alpha = \mathrm{MinEigenVec}[K^2]$, i.e., plain KPCA.

2.5.2 Euclideanization 8),10). The metric can instead be normalized by its volume:
$$E_E(\alpha) = \alpha^\top \left(K \mathcal{D}^{1/m} K\right) \alpha, \qquad \mathcal{D} = \mathrm{diag}\{\mathrm{Det}\,G_{\phi[1]}, \ldots, \mathrm{Det}\,G_{\phi[D]}\},$$
$$\mathrm{Det}\,G_{\phi[d]} = \det\{J^+_{\phi[d]} (J^+_{\phi[d]})^\top\}\, \det G_{x[d]},$$
where $\mathrm{Det}\,X$ denotes the pseudo-determinant (the product of the nonzero eigenvalues) of $X$; the minimizer is $\alpha = \mathrm{MinEigenVec}[K \mathcal{D}^{1/m} K]$.
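The alternating minimization of Sec. 2.4 is straightforward to implement. The following NumPy sketch is our illustration (not the authors' code): K is the Gram matrix, Kp[d] the $D \times m$ matrix $K'_{[d]} = \Phi^\top J_{\phi[d]}$, and Gp[d] the pseudo-inverse metric $G^+_{x[d]}$; the bias term $b$ of (2) is omitted, and initializing all weights to 1 makes the first step coincide with plain KPCA (Sec. 2.5.1).

    import numpy as np

    def jacobian_kpca_fit(K, Kp, Gp, n_iter=50, tol=1e-10):
        """Alternating minimization of E(alpha) in (3), following Sec. 2.4.

        K  : (D, D) Gram matrix, K[i, j] = k(x_i, x_j)
        Kp : list of D arrays (D, m); Kp[d] = Phi^T J_phi at x_d
        Gp : list of D arrays (m, m); Gp[d] = pseudo-inverse metric at x_d
        Returns the coefficient vector alpha, with a = Phi alpha.
        """
        D = K.shape[0]
        mu = np.ones(D)                      # mu^(0)_[d] = 1 starts from plain KPCA
        alpha = None
        for _ in range(n_iter):
            M = K @ np.diag(1.0 / mu) @ K    # K Lambda^{-1} K
            w, V = np.linalg.eigh(M)         # symmetric eigendecomposition, ascending
            alpha_new = V[:, 0]              # MinEigenVec[.]: smallest eigenvalue
            mu = np.array([alpha_new @ Kp[d] @ Gp[d] @ Kp[d].T @ alpha_new
                           for d in range(D)])
            mu = np.maximum(mu, 1e-12)       # guard against degenerate weights
            if alpha is not None and min(np.linalg.norm(alpha - alpha_new),
                                         np.linalg.norm(alpha + alpha_new)) < tol:
                alpha = alpha_new
                break
            alpha = alpha_new
        return alpha

Each iteration costs one $D \times D$ eigendecomposition; the sign ambiguity of eigenvectors is handled when testing convergence.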
2.5.3 Taubin's method 23). Taubin's normalization replaces the point-wise denominator by its average,
$$E[G_{\mathrm{kernel}}] = \frac{1}{D} \sum_{d=1}^{D} K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]},$$
and minimizes $\alpha^\top K^2 \alpha$ under this normalization, i.e., solves the generalized eigenvalue problem
$$K^2 \alpha = \lambda\, E[G_{\mathrm{kernel}}]\, \alpha$$
for the smallest generalized eigenvalue.

2.6 Interpretation

Plain KPCA uses $K^2 = K K$, whereas Jacobian KPCA uses $K (\Lambda^{[\infty]})^{-1} K$; at convergence, $\mu^{[\infty]}_{[d]} = \alpha^\top K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]}\, \alpha$ is the squared gradient norm of the fitted function at $x_{[d]}$ measured by $G^+_{x[d]}$, so each feature-space residual is rescaled into a first-order approximation of the input-space distance from $x_{[d]}$ to the fitted surface.

3. Fitting experiments

Points sampled from the curve $y = x^2$, $x \in [-3, 3]$, were perturbed by noise, and the fits by plain KPCA and by Jacobian KPCA were compared; heavier-tailed perturbations drawn from the Laplace density $\frac{\lambda}{2} \exp(-\lambda |x - \mu|)$ were also examined. Jacobian KPCA follows the curve in the $xy$ metric of the input plane, while plain KPCA distorts the fit where the feature-space metric disagrees with that of the input plane. (Figure 1, showing the fitted curves, is omitted here.)

4. Outlier detection

4.1 Residual

For a fit $\alpha$, the first-order approximation of the squared input-space distance from $x_{[d]}$ to the fitted surface is
$$r_{[d]}^{\,2} = \frac{\alpha^\top K_{[d]} K_{[d]}^\top\, \alpha}{\alpha^\top K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]}\, \alpha}; \qquad (4)$$
in the following, $R_{[d]}$ denotes this squared residual evaluated at the estimated $\alpha$:
$$R_{[d]} = \frac{\alpha^\top K_{[d]} K_{[d]}^\top\, \alpha}{\alpha^\top K'_{[d]}\, G^+_{x[d]}\, K'^\top_{[d]}\, \alpha}. \qquad (5)$$

4.2 RANSAC

RANSAC 7) detects outliers as follows (see the sketch after this list):
(1) randomly sample the minimal number of points that determines the model;
(2) fit the model to the sample;
(3) collect the consensus set, i.e. the points whose residual $R_{[d]}$ falls below a threshold;
(4) repeat (1)-(3) and keep the model with the largest consensus set, then refit on that set.
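A generic Python sketch of this loop (our illustration; fit_fn and residual_fn are placeholder callbacks, e.g. a Jacobian-KPCA fit on the sampled points and the residual $R_{[d]}$ of (5)):

    import numpy as np

    def ransac(points, fit_fn, residual_fn, n_sample, threshold, n_trials, rng=None):
        """Generic RANSAC loop (Fischler & Bolles 7)): repeat random minimal
        sampling, model fitting, and consensus counting; keep the largest
        consensus set.  n_trials can be chosen by (6) below."""
        rng = rng or np.random.default_rng()
        D = len(points)
        best_inliers, best_model = np.zeros(D, dtype=bool), None
        for _ in range(n_trials):
            idx = rng.choice(D, size=n_sample, replace=False)   # (1) random sample
            model = fit_fn(points[idx])                         # (2) fit the model
            inliers = residual_fn(model, points) < threshold    # (3) consensus set
            if inliers.sum() > best_inliers.sum():              # (4) keep the best
                best_inliers, best_model = inliers, model
        return fit_fn(points[best_inliers]), best_inliers       # refit on inliers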
If a fraction $\alpha$ of the data are outliers and the model is determined by $m$ points, the probability that at least one of $k$ random samples is outlier-free exceeds $\beta$ when
$$k \ge \frac{\log(1 - \beta)}{\log\{1 - (1 - \alpha)^m\}}. \qquad (6)$$

4.3 Scale estimation by MAD and the χ² distribution

The median absolute deviation (MAD) of scalars $\{x_n\}_{n=1}^{N}$ is
$$\mathrm{mad}[x_n] = \mathrm{med}\left[\,\left|x_n - \mathrm{med}(x_n)\right|\,\right].$$
If $x_n \sim N(0, \sigma^2)$, then
$$\hat\sigma = \frac{\mathrm{mad}[x_n]}{\sqrt{\chi^2_{0.5}(1)}} \simeq 1.4826\,\mathrm{mad}[x_n] = 1.4826\,\mathrm{med}\left[\,|x_n|\,\right] \ 19),$$
where $\chi^2_\alpha(L)$ denotes the upper $100\alpha\%$ point of the $\chi^2$ distribution with $L$ degrees of freedom. More generally, for $L$-dimensional residuals with $x_n \sim N(0_L, \sigma^2 I_L)$,
$$\hat\sigma^2 = \frac{\mathrm{med}_n\left[\,\|x_n\|^2\,\right]}{\chi^2_{0.5}(L)}. \qquad (7)$$
When a $k$-dimensional model is fitted in $n$ dimensions, the inlier residual is $L = n - k$ dimensional, and (7) reduces to the MAD rule for $L = 1$.

4.4 Combining RANSAC with the χ² test

RANSAC and the χ² test are combined for outlier removal 22); the scale $\sigma$ required by the test is estimated robustly as above.

4.4.1 Run RANSAC with the number of trials given by (6), and estimate $\sigma$ from the residuals of the consensus set by (7).

4.4.2 The final fit is then obtained by trimming, in the spirit of robust PCA (R-PCA):
(1) fit (Jacobian kernel) PCA to the current point set $\{\Phi(x_{[d]})\}_{d=1}^{D}$ to obtain $\alpha$;
(2) compute $R_{[d]}$ for every point and let $S$ be the set of points with $R_{[d]} < \chi^2_\gamma(L)\,\hat\sigma^2$, which retains about $(1 - \gamma)\cdot 100\%$ of the inliers;
(3) refit PCA on $S$ only, and iterate (1)-(3) until $S$ stops changing.

5. M-estimation

Least-squares fitting minimizes $\sum_{d=1}^{D} R_{[d]}$, i.e., $\sum_{d} \rho(r_{[d]})$ with $\rho(x) = x^2/2$ (up to scale), so a single large residual can dominate the estimate. M-estimation 15) instead minimizes $\sum_{d=1}^{D} \rho(r_{[d]})$ for a loss $\rho$ that grows more slowly. For differentiable $\rho$, $\Psi(x) = \frac{d\rho(x)}{dx}$ is called the influence function and $w(x) = \Psi(x)/x$ the weight function; for least squares, $\Psi(x) = x$ and $w(x) = 1$, so distant points keep full influence (Fig. 2(a)-(c)).

The log cosh M-estimator uses
$$\rho(x) = \sigma^2 \log\cosh\frac{x}{\sigma} = \frac{x^2}{2} - \frac{x^4}{12\sigma^2} + o(x^4),$$
which behaves like least squares for small residuals (and recovers it as $\sigma \to \infty$), while
$$\Psi(x) = \sigma\tanh\frac{x}{\sigma}, \qquad w(x) = \frac{\sigma}{x}\tanh\frac{x}{\sigma}$$
(Fig. 2(d)-(f)): since $\tanh t = t + o(t^2)$, $\Psi(x) \simeq x$ near the origin, while the influence of large residuals is bounded by $\sigma$ and their weight decays like $\sigma/|x|$.
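A standard way to carry out such M-estimation is iteratively reweighted least squares (IRLS), alternating between residuals, weights $w(r_{[d]})$, and a weighted refit. The sketch below is our illustration, not the authors' procedure; residual_fn and weighted_fit_fn are placeholder callbacks, and the scale $\sigma$ is set by the MAD rule of Sec. 4.3.

    import numpy as np

    def mad_scale(r):
        # Robust scale from scalar residuals: sigma ≈ 1.4826 * MAD (Sec. 4.3).
        return 1.4826 * np.median(np.abs(r - np.median(r)))

    def logcosh_weight(r, sigma):
        # Weight function w(x) = (sigma/x) tanh(x/sigma) of the log cosh loss;
        # w -> 1 as x -> 0, and w ≈ sigma/|x| for large residuals.
        t = r / sigma
        w = np.ones_like(r)
        nz = np.abs(t) > 1e-12
        w[nz] = np.tanh(t[nz]) / t[nz]
        return w

    def irls(residual_fn, weighted_fit_fn, model0, n_iter=20):
        # Minimize sum_d rho(r_d) by alternating: residuals -> weights -> refit.
        model = model0
        for _ in range(n_iter):
            r = residual_fn(model)                # signed residuals r_[d]
            w = logcosh_weight(r, mad_scale(r))
            model = weighted_fit_fn(w)            # weighted least-squares refit
        return model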
Vol.-CVIM-7 No.7 /3/8 (d) (e) (f) ρ Ψ w 4 4 3 3 ρ( x) - - (a) ρ(x) = x / 4 4 3 3 ρ( x) - - M (d) ρ(x) = σ log cos x σ ψ( x) - - - - - (b) Ψ(x) = x ψ( x) 6. - - - (e) Ψ(x) ( = ) x σ tanh σ M.9 w( x).8.7.6..4.3...7.6..4.3.. -σ σ - - (c) w(x) =.9 w( x).8 -σ σ - - (f) w(x) ( = ) σ x x tanh σ least absolute value estimation least absolute deviations M ρ(x) = x ψ(x) = sign(x) x = ρ(x) = x x x + ε ε 7. R [d] RANSAC RS- KPCA PCA M +M M M +MJ 3 = RANSAC KPCA M RS-+MJ 7. % 3% % % 6 % 3 3 RS- 4 RS- +M 3 +MJ % KPCA RANSAC RS 3 3 4 RS 3 3 4 +M +M RS +M RS +M +MJ +MJ RS +MJ RS +MJ 3 % KPCA RANSAC PCA RANSAC M RANSAC PCA 3% 4 3 RS- 4 RS- +M 3 +MJ 3% KPCA RANSAC 6 c Information Processing Society of Japan
7. Robust fitting experiments

The proposed methods were compared on data contaminated by outliers: RANSAC-based outlier removal (RS), the two M-estimation variants of Secs. 5 and 6 (+M and +MJ), and their combinations (RS+M, RS+MJ), against plain KPCA and Jacobian KPCA. Performance was measured by the total residual $\sum_{d=1}^{D} R_{[d]}$ and by the $L^2$ error between the fitted and the true curve. Figures 3-7, showing the fitted curves at each outlier ratio and scatter plots of the $L^2$ error of one method against another (e.g., +MJ versus RS+MJ), are omitted here; in these comparisons, the variants with RANSAC preprocessing were the most stable.
8. Conclusion

This paper proposed robust subspace fitting methods based on Jacobian kernel PCA: outlier detection by RANSAC combined with a χ² test, M-estimation with the log cosh loss, and least absolute value estimation, together with an R-PCA-style trimming procedure. (Figure 8, illustrating outlier removal by RANSAC, is omitted here.)

References
1) M. Aizerman, É. Braverman and L. Rozonoér, Theoretical foundations of the potential function method in pattern recognition learning, Automation and Remote Control, 25:821-837, 1964.
2) S. Akaho, Curve fitting that minimizes the mean square of perpendicular distances from sample points, SPIE Vision Geometry II, 1993.
3) (In Japanese.) IEICE Trans., D-II, J86-D-II(7):934-94, 2003.
4) (In Japanese.) 2008.
5) W. Chojnacki, M. J. Brooks, A. van den Hengel and D. Gawley, On the fitting of surfaces to data with covariances, IEEE Trans. PAMI, 22(11):1294-1303, 2000.
6) C. Ding, D. Zhou, X. He and H. Zha, R1-PCA: Rotational invariant L1-norm principal component analysis for robust subspace factorization, Proc. ICML 2006.
7) M. A. Fischler and R. C. Bolles, Random sample consensus: A paradigm for model fitting with applications to image analysis and automated cartography, Comm. ACM, 24:381-395, 1981.
8) (In Japanese.) IEICE Tech. Rep., PRMU2004-49:9-96, 2004.
9) J. Fujiki and S. Akaho, Spherical PCA with Euclideanization, Proc. Workshop Subspace 2007 (ACCV 2007 Workshop), 2007.
10) (In Japanese.) Proc. Workshop Subspace 2008 (MIRU 2008), 2008.
11) (In Japanese.) Nov. 2008.
12) (In Japanese.) Mar. 2009.
13) N. H. Gray, P. A. Geiser and J. R. Geiser, On the least-squares fit of small and great circles to spherically projected orientation data, Mathematical Geology, 12(3):173-184, 1980.
14) R. Hartley and A. Zisserman, Multiple View Geometry in Computer Vision, Cambridge University Press, 2nd edition, 2003.
15) P. J. Huber, Robust estimation of a location parameter, Ann. Math. Stat., 35:73-101, 1964.
16) (In Japanese.) IPSJ SIG Tech. Rep., 2008-CVIM-164-3:7-4, 2008.
17) Q. Ke and T. Kanade, Robust L1 norm factorization in the presence of outliers and missing data by alternative convex programming, Proc. CVPR 2005.
18) Y. Leedan and P. Meer, Heteroscedastic regression in computer vision: Problems with bilinear constraint, IJCV, 37(2):127-150, 2000.
19) P. J. Rousseeuw and A. M. Leroy, Robust Regression and Outlier Detection, John Wiley & Sons, NY, 1987.
20) P. D. Sampson, Fitting conic sections to "very scattered" data: An iterative refinement of the Bookstein algorithm, Computer Graphics and Image Processing, 18:97-108, 1982.
21) B. Schölkopf, A. Smola and K.-R. Müller, Nonlinear component analysis as a kernel eigenvalue problem, Neural Computation, 10:1299-1319, 1998.
22) Y. Sugaya and K. Kanatani, Outlier removal for motion tracking by subspace separation, IEICE Trans. Inf. & Syst., E86-D(6):1095-1102, Jun. 2003.
23) G. Taubin, Estimation of planar curves, surfaces, and nonplanar space curves defined by implicit equations with applications to edge and range image segmentation, IEEE Trans. PAMI, 13(11):1115-1138, 1991.
24) V. Vapnik, The Nature of Statistical Learning Theory, Springer-Verlag, 1995.

© Information Processing Society of Japan